
AI Frameworks and Tools


Software tools at all levels of the AI stack unlock the full capabilities of your Intel hardware. All Intel AI tools and frameworks are built on the foundation of a standards-based, unified oneAPI programming model that helps you get the most performance from your end-to-end pipeline on all your available hardware.

  • Tools
  • Deep Learning Frameworks
  • Machine Learning Frameworks
  • Libraries

Productive, easy-to-use AI tools and suites span multiple stages of the AI pipeline, including data engineering, training, fine-tuning, optimization, inference, and deployment.

AI Tool Selector

Products are grouped to meet common AI workloads such as machine learning, deep learning, and inference optimization. You can also customize your download to include only the tools you need from the conda*, pip, and Docker* repositories. A full offline installer is also available.

Configure & Download
  • Optimized frameworks, a model repository, and model optimization for deep learning
  • Extensions for scikit-learn* and XGBoost for machine learning
  • Accelerated data analytics through Intel contributions to Modin*, a drop-in replacement for pandas
  • Optimized core Python* libraries
  • Samples for end-to-end workloads
  • Model compression with a framework-independent API


OpenVINO™ Toolkit

Write Once, Deploy Anywhere

Deploy high-performance inference applications from device to cloud, powered by oneAPI. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:

  • Repository of open source, pretrained, and preoptimized models ready for inference
  • Model optimizer for your trained model
  • Inference engine to run inference and output results on multiple processors, accelerators, and environments with a write-once, deploy-anywhere efficiency

Learn More


Intel Gaudi Software

Speed up AI Development

Get access to the Habana SynapseAI® development software stack, which supports TensorFlow and PyTorch frameworks.

  • Software optimized for deep learning training and inference
  • Integrates popular frameworks: TensorFlow and PyTorch
  • Provides custom graph compiler
  • Supports custom kernel development
  • Enables ecosystem of software partners
  • Habana GitHub & Community Forum

Learn More

BigDL

Scale your AI models seamlessly to big data clusters with thousands of nodes for distributed training or inference.

Learn More

Intel® Distribution for Python*

Develop fast, performant Python code with a set of essential scientific and Intel®-optimized computational packages, including NumPy, SciPy*, Numba*, and others.

Learn More

Intel® AI Reference Models

Access a repository of pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized to run on Intel hardware.

Learn More


Intel® Tiber™ Portfolio


 

Intel® Tiber™ Developer Cloud1

Build and deploy AI at scale on managed, high-performance, cost-effective cloud resources, and get to market faster. With Intel’s cloud, develop and optimize AI models and applications, run small- and large-scale training and inference workloads, and deploy with the best price-performance.


1 Formerly Intel® Developer Cloud

 

Intel® Tiber™ Edge Platform

Build, deploy, run, manage, and scale edge and AI solutions on standard hardware with cloud-like simplicity. Built on extensive edge expertise, it’s designed for the most demanding edge use cases and to accelerate edge AI development while reducing costs.

 

Intel® Tiber™ AI Studio2

Streamline the AI model lifecycle to create better models for your business and reduce time managing hardware and software. Use the MLOps platform to automate retraining and create more efficient workflows for a greater impact from AI.


2 Formerly cnvrg.io


 

Open source deep learning frameworks run with high performance across Intel devices through optimizations powered by oneAPI, along with open source contributions by Intel.

PyTorch*

PyTorch* is an AI and machine learning framework based on Python, and is popular for use in both research and production. Intel contributes optimizations to the PyTorch Foundation to accelerate PyTorch on Intel processors. The newest optimizations, as well as usability features, are first released in Intel® Extension for PyTorch* before they are incorporated into open source PyTorch.

Learn More | Get Started

TensorFlow*

TensorFlow* is used widely for AI development and deployment. Its primary API is based on Python*, and it also offers APIs for a variety of languages such as C++, JavaScript*, and Java*. Intel collaborates with Google* to optimize TensorFlow for Intel processors. The newest optimizations and features are often released in Intel® Extension for TensorFlow* before they become available in open source TensorFlow.
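Because the extension registers itself through TensorFlow’s pluggable-device mechanism, ordinary TensorFlow code needs no changes to benefit. A minimal sketch, assuming `tensorflow` is installed:

```python
import tensorflow as tf

# Standard TensorFlow code. If intel-extension-for-tensorflow is
# installed, its device plugin is picked up automatically.
x = tf.random.normal((2, 3))
w = tf.random.normal((3, 4))
y = tf.matmul(x, w)
```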

Learn More | Get Started

JAX

JAX is an open source Python library designed for complex numerical computations on high-performance devices like GPUs and TPUs (tensor processing units). It supports NumPy functions and provides automatic differentiation, as well as a composable function transformation system to build and train neural networks. JAX is supported on Intel processors using Intel Extension for TensorFlow.
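The automatic differentiation mentioned above is a one-liner in JAX: `jax.grad` transforms a function into its gradient function. A small sketch, assuming `jax` is installed:

```python
import jax
import jax.numpy as jnp

# f(x) = sum(x^2), so the gradient is 2x elementwise.
def f(x):
    return jnp.sum(x ** 2)

g = jax.grad(f)(jnp.array([1.0, 2.0, 3.0]))
```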

Learn More | Get Started

DeepSpeed

DeepSpeed is an open source deep learning optimization software suite. It accelerates training and inference of large models by automating parallelism, optimizing communication, managing heterogeneous memory, and compressing models. DeepSpeed supports Intel CPUs, Intel GPUs, and Intel® Gaudi® AI accelerators.
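For illustration, a minimal `ds_config.json` sketch enabling two of the techniques mentioned above, mixed-precision training and ZeRO partitioning of optimizer state. The values are placeholders, not recommendations:

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```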

Learn More | Get Started

PaddlePaddle*

This open source, deep learning Python framework from Baidu* is known for user-friendly, scalable operations. Built using Intel® oneAPI Deep Neural Network Library (oneDNN), this popular framework provides fast performance on Intel Xeon Scalable processors and a large collection of tools to help AI developers.

Learn More | Get Started


Classical machine learning algorithms in open source frameworks utilize oneAPI libraries. Intel also offers further optimizations in extensions to these frameworks.

scikit-learn*

scikit-learn* is one of the most widely used Python packages for data science and machine learning. Intel® Extension for Scikit-learn* provides a seamless way to speed up many scikit-learn algorithms on Intel CPUs and GPUs, both single- and multi-node.
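A minimal sketch of the drop-in acceleration, assuming the `scikit-learn-intelex` package is installed. Note that `patch_sklearn()` must run before the scikit-learn estimators are imported:

```python
from sklearnex import patch_sklearn
patch_sklearn()  # swap in accelerated implementations

# Imports after patching pick up the optimized versions; the
# scikit-learn API itself is unchanged.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).random((100, 2))
km = KMeans(n_clusters=3, n_init=10).fit(X)
```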

Learn More

XGBoost

XGBoost is an open source, gradient boosting, machine learning library that performs well across a variety of data and problem types. Intel contributes software accelerations powered by oneAPI directly to open source XGBoost, without requiring any code changes.

Learn More | Get Started

oneAPI libraries deliver code and performance portability across hardware vendors and accelerator technologies.

Intel® oneAPI Deep Neural Network Library

Deliver optimized neural network building blocks for deep learning applications.

Learn More

Intel® oneAPI Data Analytics Library

Help speed up big-data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics.

Learn More

Intel® oneAPI Math Kernel Library

Accelerate math-processing routines, increase science, engineering, and financial application performance, and reduce development time.

Learn More

Intel® oneAPI Collective Communications Library

Use this scalable, high-performance communication library for deep learning and machine learning workloads.

Learn More


Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel. 

