
Apple has made a significant stride in its efforts to empower developers with cutting-edge on-device AI capabilities. The tech giant recently released 20 new Core ML models and 4 datasets on Hugging Face, a leading community platform for sharing AI models and code. This move underscores Apple’s commitment to advancing AI while prioritizing user privacy and efficiency.
Clement Delangue, cofounder and CEO of Hugging Face, highlighted the significance of this update in a statement sent to VentureBeat. “This is a major update by uploading many models to their Hugging Face repo with their Core ML framework,” Delangue said. “The update includes exciting new models focused on text and images, such as image classification or depth segmentation. Imagine an app that can effortlessly remove unwanted backgrounds from photos or instantly identify objects in front of you and provide their names in a foreign language.”
Optimized models for enhanced performance and privacy
The newly released Core ML models encompass a wide range of applications, including FastViT for image classification, DepthAnything for monocular depth estimation, and DETR for semantic segmentation. These models have been optimized to run exclusively on users’ devices, eliminating the need for a network connection. This approach not only enhances app performance but also ensures that user data remains secure and private.
Delangue emphasized the importance of on-device AI, stating, “Core ML models run strictly on the user’s device and remove any need for a network connection. This keeps your app lightning-fast and ensures user data remains private.”
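For a sense of what this looks like in practice, the sketch below loads one of the image-classification models and runs it through Apple's Vision framework entirely on-device. The FastViTMA36F16 class name is an assumption for illustration only; Xcode generates the actual class from whichever .mlpackage a developer pulls from Apple's Hugging Face repository.

```swift
import CoreML
import CoreGraphics
import Vision

// Minimal sketch: on-device image classification with a Core ML model.
// "FastViTMA36F16" is a placeholder for the class Xcode generates from the
// downloaded .mlpackage; no network call is made anywhere in this function.
func classify(cgImage: CGImage) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML choose CPU, GPU, or Neural Engine

    let coreMLModel = try FastViTMA36F16(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Top label and confidence, e.g. "golden retriever: 0.93"
        print("\(top.identifier): \(top.confidence)")
    }

    // The image is processed locally and never leaves the device.
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

With computeUnits set to .all, Core ML can dispatch the work to the Neural Engine where available, which is how these optimized models keep latency and power draw low without a server round trip.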
Collaboration with Hugging Face fuels AI innovation
The release of these models and datasets on Hugging Face is a testament to Apple’s growing partnership with the AI community platform. In recent months, Apple has been actively collaborating with Hugging Face to power various initiatives, such as the MLX Community and the integration of open-source AI into Apple Intelligence features.
Industry experts believe that Apple’s focus on on-device AI aligns with the broader trend of shifting computational power from the cloud to edge devices. By leveraging the capabilities of Apple Silicon and minimizing memory footprint and power consumption, Core ML enables developers to create intelligent apps that deliver seamless user experiences without compromising privacy or performance.
Empowering developers to build privacy-focused intelligent apps
As the demand for privacy-preserving and efficient AI solutions continues to rise, Apple’s latest move is expected to empower developers to build innovative applications across various domains, from image and video processing to natural language understanding and beyond. With the availability of these new Core ML models and datasets on Hugging Face, the AI community can further collaborate, iterate, and push the boundaries of what’s possible with on-device AI.
Apple’s commitment to advancing AI while prioritizing user privacy sets a strong precedent for the industry. As more tech giants recognize the importance of on-device AI, it is likely that we will see a surge in the development of intelligent, privacy-focused applications that harness the power of local, specialized models to deliver transformative user experiences.