Microsoft Phi-3 GenAI Models Are Accelerated by Intel AI Platforms
In partnership with Microsoft, Intel has made a number of Phi-3 models compatible with its edge solutions, AI PCs, and data center systems.
What’s New: Intel has optimized and validated its AI product lineup across client, edge, and data center for several models in Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can be quickly fine-tuned to meet specific needs, run on less powerful hardware, and let developers build applications that execute locally.
Intel’s supported products include Intel® Gaudi® AI accelerators and Intel® Xeon® processors for data center applications, and Intel® Core™ Ultra processors and Intel® Arc™ graphics for client use.
“We provide customers and developers with powerful AI solutions that utilize the industry’s latest AI models and software. Our active collaboration with fellow leaders in the AI software ecosystem, like Microsoft, is key to bringing AI everywhere. We’re proud to work closely with Microsoft to ensure Intel hardware – spanning data center, edge and client – actively supports several new Phi-3 models.”
-Pallavi Mahajan, Intel corporate vice president and general manager of Data Center and AI Software
Why This Is Important: Working with AI pioneers and innovators, Intel consistently invests in the AI software ecosystem as part of its commitment to bring AI everywhere.
Intel collaborated with Microsoft to enable Phi-3 model support on launch day for its central processing units (CPUs), graphics processing units (GPUs), and Intel Gaudi accelerators. Intel also co-designed DeepSpeed, an easy-to-use software suite for deep learning optimization, and extended Hugging Face’s automatic tensor parallelism support to Phi-3 and other models.
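Tensor parallelism, which the work above enables automatically for Phi-3, shards a layer’s weight matrix across devices so each device computes only a slice of the output. A minimal NumPy sketch of the idea follows (illustrative only; it is not the DeepSpeed or Hugging Face API):

```python
import numpy as np

# Toy dimensions: a batch of 2 activations, hidden size 8, output size 6.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))   # input activations (replicated on every device)
W = rng.standard_normal((8, 6))   # full weight matrix of one linear layer

# Column-parallel sharding: each "device" holds a vertical slice of W.
num_devices = 2
shards = np.split(W, num_devices, axis=1)

# Each device multiplies the same input by its own shard...
partial_outputs = [x @ shard for shard in shards]

# ...and an all-gather concatenates the partial results.
y_parallel = np.concatenate(partial_outputs, axis=1)

# The sharded computation matches the single-device result.
assert np.allclose(y_parallel, x @ W)
```

In a real deployment the shards live on separate accelerators and the concatenation is a collective communication step, but the arithmetic identity is the same.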
Phi-3 models’ compact size makes them well suited to on-device inference and to lightweight model work on AI PCs and edge devices, such as fine-tuning or customization. Comprehensive software frameworks and tools accelerate development on Intel client hardware: PyTorch and Intel® Extension for PyTorch for local research and development, and the OpenVINO™ toolkit for model deployment and inference.
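For local experimentation, the Phi-3 instruct variants expect a simple chat template built from `<|user|>`, `<|assistant|>`, and `<|end|>` markers. The helper below is a sketch of that format as published for the Phi-3-mini instruct models; verify the template against the model card for the exact variant you deploy:

```python
def build_phi3_prompt(messages):
    """Format a list of {'role', 'content'} dicts into a Phi-3 instruct prompt.

    Follows the <|role|> ... <|end|> chat template used by the Phi-3-mini
    instruct models; check the model card for your specific variant.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    # A trailing assistant tag cues the model to generate its reply.
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_phi3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is an AI PC?"},
])
print(prompt)
```

The resulting string can be passed to a locally loaded Phi-3 model through PyTorch or an OpenVINO inference pipeline; most tokenizer libraries also expose the template directly via a chat-template method.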
Next Up: Intel is committed to supporting and optimizing software for Phi-3 and other cutting-edge language models to meet the generative AI needs of its enterprise customers.