AI & Machine Learning Ecosystem Developer Resources
Developers at industry-leading independent software vendors (ISVs), system integrators (SIs), original equipment manufacturers (OEMs), and enterprise end users use Intel® tools and framework optimizations to build their AI platforms, systems, and applications. The Intel® AI Portfolio helps deliver performance and productivity at scale while making it seamless for developers and data scientists to accelerate their AI journey from the edge to the cloud.
Accenture*
Since 2014, Accenture* and Intel have come together to help customers realize positive change. Together, we accelerate transformation through co-innovation and capabilities alignment to deliver consistent client outcomes at diverse companies.
Anaconda*
The Anaconda* open source repository is embedded in Intel® AI and machine learning products, including Intel® Distribution for Python* and AI Tools. Together, Intel® Software Guard Extensions (Intel® SGX) and Anaconda software enable data scientists to run open source code in a hardware-protected environment.
Hugging Face*
Hugging Face* and Intel collaborate to build state-of-the-art hardware and software AI acceleration to train, fine-tune, and predict with transformer models. Intel tools, including AI Tools, Intel® Neural Compressor, Intel® Distribution of OpenVINO™ toolkit, and SigOpt*, deliver software acceleration to the Hugging Face Optimum library.
IBM*
IBM* and Intel have long collaborated on data and AI products and have been working together on embeddable AI for the past year. The improved IBM Watson* NLP Library for Embed takes advantage of Intel® AI software integration, powered by oneAPI, and the new Intel® Xeon® Scalable processors.
PyTorch* Foundation
Intel is honored to join the PyTorch* Foundation as a premier member. Intel's contributions to PyTorch started in 2018 with the vision of democratizing access to AI through ubiquitous hardware and open software.
Testimonials
"Through our partnership with Intel, we have helped clients improve their total cost of ownership and performance by leveraging best in class hardware and software. Intel's seamless product integration has allowed our customers to provide the highest quality end user experiences. Intel's developer documentation makes it simple to share software such as AI Tools (powered by oneAPI), cnvrg.io, SigOpt, and many more with our massive data science community. Intel incorporates ease of use in their product lineup. With oneAPI, engineers can train, score, and deploy models in a production environment with improved accuracy and performance. This consistent and rewarding experience across the product suite makes Intel a competitive choice for AI workloads."
— Ramtin Davanlou, chief technology officer, Accenture
"AI Tools were extremely easy to use. With just a few hours of mostly configuration work, we were able to use them to significantly improve the performance of our machine learning code. This allowed us to analyze larger datasets on the same size compute resources and significantly reduce the carbon footprint of our model training. It was so easy to use, secure, flexible, and scalable that you don't have any reason not to try it today."
— Arijit Sengupta, founder and CEO, Aible*
30 Days to AI Success (and Often in Merely Five)
30 Days to AI Value: Development Best Practices from Intel and Aible
Intel Teams Up with Aible to Fast-Track Enterprise Analytics and AI
"Through a strong and close partnership with Intel, we have helped our customers greatly accelerate their online services with Intel technology. By leveraging and integrating the key features of Intel Neural Compressor and Intel® Extension for Transformers* into Alibaba Cloud* PAI-Blade, we offer extremely high performance and reduce the total cost of ownership (TCO). These tools provide a high-performance solution for model optimization and optimization-aware inference, which makes it easy for PAI-Blade to adopt optimizations like int8 for better performance without accuracy loss. We believe our ongoing collaboration with Intel will bring more benefits to AI workloads and services."
— Shen Li, staff algorithm engineer, Alibaba Cloud
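The int8 optimization mentioned above works by mapping floating-point weights onto 8-bit integers via a scale factor. As a conceptual sketch only in plain Python (this is not the Intel Neural Compressor or PAI-Blade API; the function names are illustrative), a symmetric quantize/dequantize round trip looks like:

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats into [-127, 127].

    Assumes at least one nonzero value (real libraries guard against
    all-zero tensors and often use per-channel scales).
    """
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.84]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

# Rounding error is bounded by half a quantization step (scale / 2),
# which is why accuracy loss can stay negligible.
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, restored))
```

Production tools add calibration over representative data, per-channel scales, and zero points for asymmetric ranges, but the core arithmetic is this mapping.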
"By integrating Intel® oneAPI Data Analytics Library (oneDAL) and AI Tools into Allegro Trains, Allegro AI offers better performance and optimized use of cloud instances."
— Moses Guttmann, chief technology officer and cofounder, Allegro
Using the Intel® Integrated Performance Primitives (Intel® IPP), ACF achieved 127x faster training performance and a 66% reduction in the overall cost of running the training algorithm in a cloud environment; with Intel® oneAPI Data Analytics Library (oneDAL), XGBoost achieved 4x faster inferencing time.
"We're seeing encouraging early application performance results on our development systems using Intel® Data Center GPU Max Series—applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL* and Python* AI frameworks such as PyTorch accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."
— Dr. Timothy Williams, deputy director, Argonne Computational Science Division
"Analytics Zoo and AI Tools with the Intel oneAPI Data Analytics Library (oneDAL) helped reduce end-to-end data processing time and significantly improved our prediction model’s accuracy for AsiaInfo* 5G network intelligence, including customer satisfaction analysis, power saving for 5G base stations, and user location analysis."
— Duozhi Zhu, general manager of 5G network product research and development department, AsiaInfo Technologies Limited
"Our successful collaboration with Intel centered around the optimization of state-of-the-art computer vision models for our UI Automation tool, in particular the analysis of user interfaces. Together, we focused on performance optimizations of our pipeline powered by oneAPI with OpenVINO on Intel CPUs, achieving considerable speed-ups in inference times. This secures fast executions of automations, and thus leads to significant time savings for our customers. We are thankful for the fruitful cooperation."
— Jonas Menesklou, CEO, askui
"We have achieved impressive efficiency gains by collaborating with Intel and Taiwan NTUNHS on implementing federated learning medical image recognition in ASUS's server products with 4th generation Intel Xeon Scalable processors. Integrating Intel® tools and Intel® Extension for PyTorch* into federated learning environments has resulted in a 13% increase in efficiency for AlexNet and a 30% increase for VGG19 in medical image recognition applications. With the help of Intel® VTune™ Profiler, we found that operation reordering by Intel® oneAPI Deep Neural Network Library (oneDNN) significantly reduced CPU computation time for deep learning model training. Intel's software optimization tools deliver values by enhancing performance in distributed machine learning environments without compromising data privacy."
— Paul Ju, corporate vice president and CTO of Data Center Solution, ASUS
"We are elated to leverage the power of CPU instances provided by Azure* Machine Learning to enable developers and data scientists to take advantage of Intel® AI optimizations powered by Intel® hardware. By integrating optimizations such as the Intel® Extension for Scikit-learn* powered by oneAPI into the platform, users can easily accelerate development and deployment of machine learning workloads for faster results and achieve a reduction in resource costs with just a few lines of code."
— Vijay Aski, partner director, AI Platform, Microsoft*
"The Intel team's optimization of fMRI and PadChest models using Intel® Extension for PyTorch* and OpenVINO powered by oneAPI, leading to approximately 6x increase in performance, tailored for medical imaging, showcases best practices that do more than just accelerate running times. These enhancements not only cater to the unique demands of medical image processing but also offer the potential to reduce overall costs and bolster scalability."
— Santamaria-Pang Alberto, principal applied data scientist, Health AI at Microsoft
Maximize & Scale Azure Machine Learning Models with Intel AI Frameworks
A Closer Look at MLOps and the Intel Extension for Scikit-learn within Azure Machine Learning
“Our collaboration with Intel, especially in utilizing the Neural Chat 7B model and Intel® Liftoff, symbolizes a groundbreaking chapter for Bilic. This joint effort is not just about integrating an advanced AI model; it's about co-creating a robust, intelligent system for fraud detection. Intel Liftoff has played a crucial role in enabling us to seamlessly deploy and scale our solutions, ensuring that our systems are as agile and adaptive as the threats they are designed to counter. Together with Intel, we're forging a new frontier in AI-driven financial security, setting a new benchmark in the industry.”
— Saminu Salisu, CEO, Bilic
"Boston is part of the AI revolution as a technical solution provider. AI is a power-hungry workload, adding to the difficulty organizations face in meeting sustainability targets. Our AI application is powered by oneAPI AI Tools, and its stack of optimized libraries such as Intel® Optimization for TensorFlow*, Intel® Distribution for Python*, Intel® oneAPI Collective Communications Library (oneCCL), Intel-optimized NumPy, Intel® Math Kernel Library, and Intel® Threading Building Blocks has ensured we get the best accelerated performance on CPU and GPU devices. We have noticed a significant decrease in power usage for each AI application, helping us meet sustainability goals for the future."
— Laxmi Nageswari Varanasi, Global Head AI Education and Solutions, Boston IT Solutions India Pvt Ltd
"At byteLAKE, we specialize in advanced AI solutions for diverse industries: manufacturing, automotive, paper, chemical, energy, and restaurants. Our passion is turning data into insights that fuel product enhancement. AI efficiently utilizes data from various sources, enabling quality inspections, process optimization, and fault detection. Our strategic partnership with Intel ensures top-tier quality for industrial clients. Collaboration with Intel's experts and technologies like OpenVINO and Intel® Deep Learning Boost (Intel® DL Boost) with Vector Neural Network Instructions helped us optimize our products' performance. Notably, our cognitive services optimization achieved over 20x performance boost in manufacturing's AI-assisted visual inspection. Sound analytics for automotive quality inspection gained 1.12x to over 22x acceleration through Intel Extension for Scikit-learn integrations. Intel’s broad portfolio also helps us ensure consistent experience for our clients across deployments including edge devices, servers, and HPC infrastructures."
— Marcin Rojek, cofounder, byteLAKE
"Codeplay* Software is a world pioneer in enabling acceleration technologies used in AI, HPC, and automotive. Codeplay has been heavily involved in the definition of SYCL and helped to grow the ecosystem, providing evaluation platforms, resources, and workshops. With oneAPI building on SYCL, Intel gains all the benefits of an open standards-based ecosystem, while enhancing it with extensions to embrace features and performance available to modern C++ developers."
— Andrew Richards, founder and CEO, Codeplay Software
"The Intel® oneAPI Base Toolkit and AI Tools improved our 3D model reconstruction's performance by up to 9x on an Intel® Xeon® platform compared to our existing GPU solution."
— Mr. Gao, research and development general manager, Daspatial†
"Intel provides the backbone for optimized AI workloads through tools and framework optimizations that are powered by oneAPI. Running DataRobot* on Intel makes it possible for our common customers to not just talk about AI—but to embrace it as a core part of their enterprise’s business and culture."
— Sirisha Kadamalakalva, chief strategy officer, DataRobot
"Deci.ai is redefining possibilities in the field of pose estimation with our groundbreaking model, YOLO-NAS Pose. Building upon the success of our open source object detection model, YOLO-NAS, this latest release includes a novel head design, which is optimized with Deci's AutoNAC for peak performance. We put YOLO-NAS Pose to the ultimate test by benchmarking it on the 4th generation Intel Xeon CPU, utilizing the oneAPI-powered OpenVINO toolkit. YOLO-NAS Pose surpassed the performance of YOLO v8 pose with an incredible 38% lower latency and 0.27% higher accuracy. YOLO-NAS Pose brings real-time pose estimation to the forefront—perfect for applications demanding quick, accurate AI insights.”
— Assaf Katan, chief business officer, Deci.ai
"We have had a great experience partnering with Intel on our complex and dynamic infrastructure. Their team was always willing to go the extra mile to make sure everything ran smoothly and that our needs were met. The right Intel hardware and AI software, powered by oneAPI, helped us improve our processes and performance, especially when it came time to deploy updated models through Intel's oneAPI tools like the Intel® Neural Compressor and Intel® Optimization for PyTorch*. These significantly improved performance for our multilingual translation model. Running on Azure’s Dv5 VMs powered by 3rd generation Intel® Xeon® Scalable processors, the model showed the best performance per €, which is why we deployed it into production. With the Intel Neural Compressor, Intel Optimization for PyTorch, and the right Intel hardware, we were able to increase performance per € by 2.85x, and even 6.25x for other models."
— Eugene Bondariev, CTO, Delphai
Intel and Delphai: Structuring the Business World So You Don't Have To
"Digital Cortex* and Intel are making XPUs as easy as CPUs so you can use the right device for each workload. No one device is the best for every job, so we include all of them, and with the power of oneAPI use each for when it's best. Digital Cortex's function as a service gives you an API to awesome, Intel-powered performance."
— Charlie Wardell, CEO and chief technology officer (CTO), Digital Cortex
“Guise AI models are optimized to run on edge leveraging Intel Distribution of OpenVINO toolkit along with Intel oneAPI powered tools and frameworks. Edge AI-enabled solutions offer rapid response times with low latency, high privacy, reduced data transfer costs, and more efficient use of network bandwidth while driving operational efficiency and increasing ROI. Optimizing with OpenVINO toolkit enables us to better serve our customers’ needs with powerful Predictive Maintenance and Intelligent Asset Management solutions built for the edge.”
— Naga Rayapati, founder and CEO, Guise AI
Intel, Red Hat, Guise AI, and OnLogic*: Bringing Intelligence to the Edge
"Hasty* and Intel are working together on computationally heavy vision AI tasks like small object detection and massive image analysis or a combination of these two challenges. Unlocking this capability will be a step-wise shift in the barrier of vision AI for critical industries such as agriculture, disaster recovery, logistics, and medical, to name a few. Our work has focused on the benefits of using CPUs and AI Tools for critical machine learning tasks like inference and data mining."
— Tristan Rouillard, CEO, Hasty
"We at HippoScreen* have been able to take advantage of the software optimizations in Intel® Extension for Scikit-learn* and Intel® Extension for PyTorch* to accelerate the build times for the AI models in our customized EEG Brain Waves analysis system by 2.4X. The Intel® VTune™ Profiler allowed us to quickly identify and rework threading oversubscription issues that were holding back our algorithms. The tools and framework optimizations in the Intel® oneAPI Base Toolkit and AI Tools provide a performant and productive way for us to build AI pipelines while also being efficient and adaptable to workflow changes."
— Daniel Weng, chief technology officer, HippoScreen Neurotech
"At Hugging Face, we are focused on making the latest advancements in AI more accessible to everyone. Making state-of-the-art machine learning models more efficient and cheaper to use is incredibly important to us, and we're proud to partner with Intel to make it easy for the community to get peak CPU performance, faster model training, and advanced AI deployments on powerful Intel® hardware devices, using our free open source Optimum library, which integrates OpenVINO, Intel Neural Compressor, Synapse AI*, and many more powerful solutions from AI Tools."
— Jeff Boudier, product director, Hugging Face
"Integrating TensorFlow* optimizations powered by Intel® oneAPI Deep Neural Network Library into the IBM Watson NLP Library for Embed led to upwards of a 165% improvement in function throughput on text and sentiment classification tasks on 4th generation Intel® Xeon® Scalable processors. This improvement in function throughput shortens inference time, leading to quicker response times when embedding the Watson NLP Library in our clients’ offerings."
— Bill Higgins, director of development for Watson AI in IBM Research
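The oneDNN-powered optimizations described above are also available in stock TensorFlow: since TensorFlow 2.9 they are enabled by default on x86, and on earlier 2.x releases they can be toggled with an environment variable. A general sketch, independent of Watson NLP (the script name here is hypothetical):

```shell
# Enable oneDNN graph and kernel optimizations in stock TensorFlow 2.x
# (on by default since TensorFlow 2.9; set to 0 to disable for comparison runs)
export TF_ENABLE_ONEDNN_OPTS=1
python run_inference.py
```

Toggling the variable off is a quick way to measure how much of a workload's throughput comes from the oneDNN path.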
"Intel and IBM’s quarter-of-a-century collaboration on Db2* continues to deliver significant performance gains for enterprises. When running mission-critical workloads, either transactional or analytical, our clients choose IBM Db2 and Intel’s new 4th and 5th generation Intel Xeon Scalable processors, on premises or in the cloud, for leading performance and scalability."
— Vikram Murali, vice president, IBM Hybrid Data Management
ICURO and Intel have collaborated in the realm of robotics and visual analytical tools. The collaboration has reached new heights with the aid of Intel® Extension for PyTorch* and AI Tools powered by oneAPI. The integration of the Intel® Arc™ A770 graphics card has unleashed the power of active deep learning. Intel® Developer Cloud has played an invaluable role in facilitating testing and benchmarking. The collaboration between ICURO and Intel is transformative and offers extraordinary possibilities.
— Paul Baclace, chief AI architect, ICURO
"At Katana Graph*, we are building the best graph intelligence platform delivering highly scalable computations for machine learning and AI.
"I am proud of Katana Graph's partnership with the AI Tools (powered by oneAPI) team as we tackle the most challenging pain points of data scientists, enabling them to make critical discoveries, perform predictive analytics on massive datasets, and develop specific applications across a range of industries including financial services, life sciences, manufacturing, and security.
"A terrific example of our combined work is in the field of Genomics where Katana Graph technology executed a 1.3 million cell genomic analysis on a next-gen Intel® Xeon® Scalable processor in 370 seconds, twice as fast as its closest competitor."
— Keshav Pingali, chief executive officer, Katana Graph
PyTorch 1.6, built using Intel® oneAPI Deep Neural Network Library, delivered up to 11.4X‡ faster inferencing for digital pathology medical screening.
"With the help of Intel, we were able to train, optimize, and deploy a machine learning model in less time and at a lower operational cost than available alternatives, enabling us to get to market fast with a powerful solution that's optimized for Intel® architecture. Specifically, using the OpenVINO™ toolkit from the Intel® toolkits, we were able to reduce the model size, which enabled us to deploy our solutions on edge devices."
— Ashok Ajad, technical lead, Medical Investment & Solutions, L&T Technology Services* (LTTS)
"We’re excited to be working closely with Intel through their oneAPI tool program. The vision of having a single unified programming model is a revolutionary approach that could fundamentally change how organizations deploy their workloads across a diverse set of accelerators and processors."
— Scott Tease, general manager, HPC & AI, Lenovo* Data Center Group
Lenovo Intelligent Computing Orchestration (LiCO) Is Now Powered by Intel® HPC Toolkit and AI Tools
LiCO is Lenovo's one-stop software solution for HPC and AI. By integrating Intel toolkits, LiCO customers can significantly improve the performance of their HPC and AI applications on cross-architecture platforms. LiCO now contains the Intel® MPI Library to help end customers reduce network latency, increase throughput, and get better performance on HPC programs. For performance analysis, LiCO customers have access to Intel® Advisor, Intel® Trace Analyzer and Collector, and Intel® VTune™ Profiler to identify bottlenecks and enable optimizations. Intel® Extension for TensorFlow* and Intel® Extension for PyTorch* accelerate AI programs on Intel CPUs and GPUs. Finally, the Intel® Neural Compressor can compress complex AI models, producing smaller, faster models without losing accuracy.
"MATLAB* and Simulink* users are designing large systems with multidomain components that increasingly rely on AI. AI performance matters whether simulations are running on a host computer, deployed in the cloud, or at the edge. Intel oneAPI Deep Neural Network Library (oneDNN) enables our solution to deliver best-in-class performance on Intel platforms."
— Fred Smith, director of engineering, MathWorks*
"Intel® toolkits helped increase our end-to-end application processing performance on Intel® Xeon® platforms. By using oneAPI technologies including Intel® Integrated Performance Primitives (Intel® IPP), a set of high-performance libraries with deep hardware optimization that provide a large number of signal processing, image processing, and other functions, and Intel® Extension for PyTorch*, we were able to significantly improve our image processing performance: 2.7x in image rotation and 4x in image resize. This allows us to analyze larger image datasets and build a cutting-edge visual AI inference solution for our end customers."
— Fei Pang, CTO, Meituan
"Our collaboration with Intel has shown that using the oneAPI-powered data analytics library (oneDAL) in ML.NET can accelerate end-to-end running times, including both training and inference, achieving up to 3x improvement. This partnership between open source projects like .NET and oneDAL not only bolsters performance via optimized hardware utilization but also paves the way for potential cost efficiencies for our community."
— Gaurav Seth, partner director of product, .NET Team, Microsoft
“We look forward to continued collaboration, working closely with Intel to optimize our AI models and exploring other data types and Intel Deep Learning Boost.”
— Bado Lee, optical character recognition (OCR) leader, Naver Corporation
Netflix* used Intel® oneAPI Deep Neural Network Library (oneDNN) to reduce latency on their FFmpeg*-based filter, which runs with other video transformations, like pixel format conversions. They also used Intel® VTune™ Profiler to uncover performance issues caused by the migration of workloads to a larger cloud instance, resulting in 3.5x performance improvement. To learn more, see:
For Your Eyes Only: Improving Netflix Video Quality with Neural Networks
Seeing through Hardware Counters: A Journey to a Threefold Performance Increase
"These breakthrough results make CPUs the best option for running transformers. Customers with performance-sensitive AI applications can use the combination of Numenta* and 4th gen Intel Xeon Scalable processors to deploy their real-time applications in a lightweight, cost-effective manner."
— Subutai Ahmad, CEO, Numenta
"PaddlePaddle* is the first AI deep learning framework in China to integrate with the traditional molecular dynamics software LAMMPS and the AI-based potential function software DeePMD-kit. Based on Intel® Xeon® [processors] and oneAPI technology with oneMKL and oneDNN, breakthrough progress across the whole process from training to inference has been realized, and performance has reached the same level as a fellow deep learning framework, enabling design and development with AI applied to materials science."
— Zhao Qiao, PaddlePaddle product leader, Baidu*
"Prediction Guard has experienced a remarkable transformation in efficiency thanks to our partnership with Intel and the utilization of their cutting-edge Gaudi2 architectures. In particular, our throughput has doubled in certain LLM-based information extraction use cases, delivering unparalleled results for our esteemed clientele. Intel® Developer Cloud has been instrumental in simplifying both our testing and production processes for our innovative generative AI platform. Our direct collaboration with Intel's software engineers and tools like the Hugging Face Optimum library and Gaudi2 architectures has been invaluable in optimizing our model deployments. We proudly stand alongside Intel as we continue to push boundaries and revolutionize the world of AI."
— Daniel Whitenack, CEO, Prediction Guard
"The PyTorch Foundation is thrilled to welcome Intel as a premier member, marking a significant milestone in our mission to empower the global AI community. Intel's extensive expertise and commitment to advancing cutting-edge technologies align perfectly with our vision of fostering open source innovation. Together, we will accelerate the development and democratization of PyTorch, and use the collaboration to shape a vibrant future of AI for all."
— Ibrahim Haddad, executive director, PyTorch Foundation
Quanta Cloud Technology (QCT)* DevCloud migrated from being an enterprise on-premises cloud solution to an OpenLab concept in 2022, after demonstrating the capability to fine-tune performance-optimized results for several HPC workloads, such as numerical weather prediction (NWP) and molecular dynamics, using the Intel® oneAPI Base and HPC toolkits. The OpenLab project phase will focus on validating heavier HPC and AI workloads, such as OpenFOAM, VASP, and AI Reference Kits from Intel, for organizations like government entities and academic science research centers. With the Intel® oneAPI Base, HPC, and AI toolkits, QCT DevCloud users can profile and optimize their code to its fullest potential on cross-architecture converged HPC and AI platforms. oneAPI not only helps developers increase performance and productivity but also lowers their development costs by facilitating code reuse and reducing time spent reprogramming.
"We believe the future of AI is open, it is hybrid and it will extend to the edge. Red Hat and Intel are committed to giving AI developers what they need to prepare for this future. We worked with Intel to help them create the Intel AI Developer program to give developers learning materials and experience with Red Hat OpenShift Data Science and Intel’s AI software suite to accelerate the building and deploying of intelligent applications to edge environments."
— Steven Huels, senior director, AI Services, Red Hat
The suite of tools available in Intel toolkits has become an integral part of the software development process at SankhyaSutra Labs. From developing optimized products using the Intel® C++ Compiler, Intel® Math Kernel Library, and Intel® oneAPI Deep Neural Network Library (oneDNN), and identifying performance gaps using Application Performance Snapshot (APS), to leveraging the DPC++ programming model for heterogeneous HPC systems, the entire workflow is available in one place as part of the Intel toolkits. This has eased development efforts and allowed more time to focus on the business case of providing fast and scalable engineering simulation software.
"Intel oneAPI has helped us integrate our HPC software development, profiling and deployment into a seamless workflow."
— Soumyadeep Bhattacharya
"The Intel toolkits provide a tremendous boost to simulation software development workflows with increased ease of access to Intel's high performance optimizations in Intel® C++ Compiler, Intel® MPI Library, and profiling tools; introduction of DPC++ for programming on heterogeneous systems with GPUs and FPGAs; and visualization of large data sets using optimized rendering libraries."
"This strategic collaboration with Intel allows Seekr* to build foundation models at the best price and performance using a supercomputer of thousands of the latest Intel® Gaudi® processors, all backed by high-bandwidth interconnectivity. Seekr's trustworthy AI products combined with the ‘AI first’ Intel® Tiber™ Developer Cloud reduces errors and bias, so organizations of all sizes can access reliable LLMs and foundation models to unlock productivity and fuel innovation, running on trusted hardware."
— Rob Clark, president and chief technology officer, Seekr
"The Seekr and Intel collaboration unlocks the capabilities of running AI on hybrid computing. Intel GPUs and Intel Gaudi 2 processors are leveraged to train and serve large- and medium-size transformers at scale. We saw improved performance for model training and inference compared to other chips in the market. Specifically for LLM inference, the large memory capacity of the HPUs and Intel GPUs has allowed us to adjust workload-specific parameters, such as batch sizes. On the software side, the various Intel extensions enabled us to move our ML stack to Intel hardware seamlessly."
— Stefanos Poulis, chief of AI research and development, Seekr
"Intel's Neural-Chat-7B has set a new benchmark on Vectara's hallucination leaderboard, as evaluated using Vectara's open source Hughes Hallucination Evaluation Model (HHEM)—the number one hallucination detection model on Hugging Face* with over 120k downloads. This result underscores three industry trends that are crucial for broader enterprise adoption of generative AI: First, as research in the field has intensified, the factual consistency of LLMs has shown steady improvement. Second, the improvement is not limited to proprietary models with API-only access but extends to models that can be freely downloaded and used by the community. Third, smaller, easier-to-deploy models in the 7B and 13B sizes are quite capable, with the right training methods, of achieving strong results on factual consistency. HHEM, developed by Vectara and available to the community, is being increasingly adopted by leading research groups at companies like Intel as the tool of choice for quantifying hallucinations in LLMs."
— Amin Ahmad, CTO and founder, Vectara*
"Intel toolkits have become an integral part of our software development process at YUAN High-Tech. We developed an optimized video processing platform using Intel® Core™ processors and the OpenVINO toolkit. After optimization, most of the AI algorithms achieved a performance improvement of around 4-5x. This helps partners develop innovative smart video solutions and gain greater insights from video data."
— HP Lin, general manager, YUAN High-Tech, Taiwan
"Video analytics workloads use oneAPI-powered components such as Intel® Video Processing Library (Intel® VPL), Intel® oneAPI Data Analytics Library (oneDAL), Python and PyTorch distributions, and Intel® Extension for Scikit-learn* to address video quality challenges in our recordings and analytics. These components work cohesively, reliably, and with greater efficiency than disparate open source alternatives. Despite the common perception that integrating multiple components can degrade performance, oneAPI breaks this mold by enhancing performance."
— Raja Gopal Hari Vijay, leadership staff member, Zoho Corp*
More Resources
AI & Machine Learning Portfolio
Explore all Intel® AI content for developers.
AI Tools
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Intel® AI Hardware
The Intel portfolio for AI hardware covers everything from data science workstations to data preprocessing, machine learning and deep learning modeling, and deployment in the data center and at the intelligent edge.
AI & Machine Learning Forums
Footnotes & Disclaimers
†Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
‡Case Study: Ningbo Konfoong Bioinformation Technology (KFBIO) Accelerates M. Tuberculosis Detection with Intel® AI