While embedded processor vendors usually focus on the deployment side of machine learning (ML) designs, NXP has taken the extra step of offering tools for data preparation and model training. NXP, whose primary interest lies in enabling its processing platforms and the end applications they serve, offers a software development environment that provides a collection of workflow tools, inference engines, neural network (NN) compilers, and optimized libraries for building ML applications on NXP microcontrollers and applications processors.
The inference engines supported by the eIQ ML development environment include Arm NN, Glow, ONNX, TensorFlow Lite, and DeepViewRT, which serve artificial intelligence (AI) applications ranging from anomaly detection to voice recognition to object classification. Moreover, the eIQ ML software can be leveraged as part of a user's existing flow or used for the complete flow, depending on the targeted ML application.
For classification and detection in vision-based models, which currently make up 60% to 70% of machine learning applications, the eIQ toolset offers a set of base models as a quick starting point. Once embedded developers have finalized training with the eIQ ML software, they can analyze the overall model and determine how much bandwidth and memory it consumes.
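To illustrate the kind of figure such an analysis reports, here is a minimal sketch that tallies the parameter count and storage size of a small hypothetical CNN. The layer names and shapes are made up for illustration and do not come from any real eIQ model or tool output.

```python
# Rough estimate of a model's memory footprint: sum the parameters of
# each layer and convert to a byte size. Layer entries are
# (name, parameter_count); all names and sizes here are illustrative.

def model_footprint(layers, bytes_per_param=4):
    """Return total parameter count and size in KiB for the given layers."""
    total_params = sum(count for _, count in layers)
    return total_params, total_params * bytes_per_param / 1024

layers = [
    ("conv1", 3 * 3 * 3 * 16 + 16),   # 3x3 conv, 3->16 channels, plus bias
    ("conv2", 3 * 3 * 16 * 32 + 32),  # 3x3 conv, 16->32 channels, plus bias
    ("dense", 32 * 10 + 10),          # fully connected classifier head
]

params, kib = model_footprint(layers)
print(f"{params} parameters ~= {kib:.1f} KiB at float32")
```

Swapping `bytes_per_param` from 4 to 1 shows why quantizing weights to 8 bits matters so much on memory-constrained microcontrollers.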
Key toolset features
The eIQ ML development environment also includes various application examples that demonstrate how to integrate neural networks into voice, vision, and sensor applications. The eIQ Toolkit in this development environment provides graphical profiling capabilities with runtime insights to help optimize neural network architectures.
Next, the eIQ Portal, an intuitive graphical user interface (GUI), enables users to create, optimize, debug, convert, and export ML models. It also allows embedded developers to import datasets and models in TensorFlow and ONNX formats, and then rapidly train and deploy neural network models and ML workloads.
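A core part of optimizing a model for export to a microcontroller is quantization. The sketch below shows the general technique of post-training affine quantization in plain Python; it illustrates the idea only and is not eIQ's actual implementation, whose details NXP does not expose here.

```python
# Sketch of affine quantization: map float weights onto 8-bit unsigned
# integers via a scale and zero point, then map them back. Illustrative
# only -- not the algorithm any specific tool uses.

def quantize(values, num_bits=8):
    """Return (quantized values, scale, zero_point) for a float list."""
    lo, hi = min(values), max(values)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax if hi != lo else 1.0
    zero_point = round(-lo / scale)
    return [round(v / scale) + zero_point for v in values], scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate float values from quantized integers."""
    return [(x - zero_point) * scale for x in q]

weights = [-0.51, 0.0, 0.25, 1.02]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
print(q)  # 8-bit codes in [0, 255]
```

The round trip loses at most half a quantization step per weight, which is the trade-off that buys a 4x reduction in model size versus float32.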
Finally, the eIQ Marketplace offers value-added solutions, professional support, and design services from trusted ecosystem partners. The design services, libraries, and models hosted through the eIQ Marketplace allow a faster time to market. For instance, Calgary, Canada-based Au-Zone's DeepView ML Tool Suite augments eIQ with an intuitive workflow, enabling developers to rapidly train and deploy NN models and ML workloads across NXP processors.
BYOD and BYOM flows
The eIQ ML development environment hosts two types of flows: bring your own data (BYOD) and bring your own model (BYOM).
For BYOD, data curation happens within the tool, so embedded developers can label data, identify regions of interest, and select how much of the dataset to hold out for validation. They can also use the dataset augmentation feature, which applies a set of filters and modifications to the existing data. For example, if you have only a few hundred images of flowers, augmentation expands that set with modified copies so you have enough images to train a robust model.
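The augmentation idea can be sketched in a few lines: generate modified copies of each existing sample to enlarge a small training set. Real tooling applies much richer transforms (rotations, crops, color jitter); the flips below, applied to a made-up 3x3 "image," just illustrate the principle.

```python
# Minimal dataset-augmentation sketch: every sample yields itself plus
# two mirrored variants, tripling the training set. Purely illustrative.

def hflip(img):
    """Mirror an image (a list of pixel rows) left to right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top to bottom."""
    return img[::-1]

def augment(dataset):
    """Return the original samples plus their flipped variants."""
    out = []
    for img in dataset:
        out.extend([img, hflip(img), vflip(img)])
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]

augmented = augment([image])
print(len(augmented))  # each input sample becomes three
```

For label-preserving transforms like these, the original annotation carries over unchanged, which is what makes augmentation cheap compared with collecting new data.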
Regarding BYOM, if embedded developers have a model built outside the eIQ toolset and want to deploy it on NXP processing platforms, the toolset provides model converters. These run the model through the different inference engines and determine which engine gives the best results. It's important to note that, even with a converted model, data collected in the field may prove useful after deployment; the toolset lets developers feed that data back to improve the model over time.
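The "try each engine, keep the best" step can be sketched as a simple benchmarking loop: time several candidate inference callables on the same input and pick the fastest. The `engine_a`/`engine_b` stand-ins below are hypothetical functions, not real eIQ inference engines, and real selection would also weigh accuracy and memory use, not just speed.

```python
# Illustrative engine-selection sketch: run each candidate inference
# function repeatedly on the same sample and report the fastest.
import time

def benchmark(engines, sample, runs=50):
    """Return the (name, elapsed_seconds) pair for the fastest engine."""
    results = {}
    for name, fn in engines.items():
        start = time.perf_counter()
        for _ in range(runs):
            fn(sample)
        results[name] = time.perf_counter() - start
    return min(results.items(), key=lambda kv: kv[1])

# Stand-in "engines" with deliberately different per-call costs.
engines = {
    "engine_a": lambda x: sum(v * v for v in x),
    "engine_b": lambda x: [v * v for v in x for _ in range(10)],
}

best, elapsed = benchmark(engines, list(range(100)))
print("fastest:", best)
```

On a real target, the candidates would be calls into the deployed runtimes, and the sample would be representative input data captured from the application.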
It’s also worth mentioning that users can build models and target them to run on a CPU, DSP, or GPU, according to the required performance profile. The eIQ ML development environment ensures that all of these compute engines are supported via the different inference engines.
Majeed Ahmad, Editor-in-Chief of EDN and Planet Analog, has covered the electronics design industry for more than two decades.