deeplearninginference.app


#Deep Learning Inference | Applications


#PEKAT VISION | Industrial visual inspection and quality assurance


#Synthesized | Labelled high-quality data generation for fraud detection


#UC Berkeley | Center for Targeted Machine Learning and Causal Inference


#DAWNBench | Deep Learning Benchmark


#MIT | Sensing, Learning & Inference Group - CSAIL


#Levatas | Builds end-to-end AI solutions, machine learning models, and human-in-the-loop systems automating visual inspection | Teamed with Boston Dynamics


#Arm | Cortex-X3 | Focus on enabling artificial intelligence and machine learning-based apps


#3DFY.ai | 3DFY Prompt | Generative AI that lets developers and creators build 3D models based on text prompts


#Deutsche Bank | Artificial intelligence to scan wealthy client portfolios


#Morgan Stanley | Experimenting with artificial intelligence


#Google | TinyML | Applying artificial intelligence to edge devices | Machine learning framework running on low-power, resource-constrained, low-bandwidth edge devices | 32-bit microcontrollers | Digital signal processors | TensorFlow Lite | TinyML applications of machine learning with a tiny footprint of a few kilobytes within embedded platforms having ultra-low power consumption, high (internet) latency, limited RAM, and limited flash memory | Running on Android, iOS, Embedded Linux, and microcontrollers | Applying machine learning and deep learning models to embedded systems running on microcontrollers, digital signal processors, or other ultra-low-power specialized processors | Running for weeks, months, or even years without recharging or battery replacement | IoT devices | Running 24×7 | Machine learning model executed within the edge device without any need for data communication over a network | Communicating only the results of inferences to the network | Deep learning framework using recurrent neural networks (RNN) for machine learning | Model training is batch training in offline mode | Dataset selection, normalization, handling of underfitting or overfitting, regularization, data augmentation, training, validation, and testing are already done with the help of a cloud platform like Google Colab | Once ported to the embedded system, the model undergoes no further training; it consumes real-time data from sensors or input devices and applies the model to that data | Arduino Nano 33 BLE Sense | SparkFun Edge | STM32F746 Discovery kit | Adafruit EdgeBadge | Adafruit TensorFlow Lite for Microcontrollers Kit | Adafruit Circuit Playground Bluefruit | Espressif ESP32-DevKitC | Espressif ESP-EYE | Wio Terminal: ATSAMD51 | Himax WE-I Plus EVB Endpoint AI Development Board | Synopsys DesignWare ARC EM Software Development Platform | Sony Spresense | Image classification | Object detection | Pose estimation | Speech recognition | Gesture recognition | Image segmentation | Text classification | On-device recommendation | Natural language question answering | Digit classifier | Style transfer | Smart reply | Super-resolution | Audio classification | Reinforcement learning | Optical character recognition | On-device training
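The on-device workflow the TinyML entry describes can be sketched as a minimal inference loop: a frozen model ported from offline training consumes sensor readings and only the inference result leaves the device. This is an illustrative pure-Python sketch, not TensorFlow Lite API code; the weights, the linear model, and the function names are all hypothetical stand-ins.

```python
# Minimal sketch of a TinyML-style edge inference loop (illustrative only):
# the model is trained offline, frozen, and ported; the device transmits
# only inference results, never raw sensor data.

def run_inference(weights, bias, features):
    """Apply a tiny frozen linear model to one sensor reading."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0  # e.g. "gesture detected" / "not detected"

# Frozen parameters produced by offline training (hypothetical values).
WEIGHTS = [0.8, -0.3, 0.5]
BIAS = -0.2

def process_sample(sensor_sample, transmit):
    # Only the inference result leaves the device, not the sensor data.
    label = run_inference(WEIGHTS, BIAS, sensor_sample)
    transmit(label)

results = []
process_sample([0.9, 0.1, 0.4], results.append)
print(results)  # → [1]
```

A real deployment would replace `run_inference` with a TensorFlow Lite for Microcontrollers interpreter invocation, but the data-flow shape stays the same: sensor in, label out, no network round trip for the model itself.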


#KoBold Metals | AI to mine rare earth metals


#DEEPX | Neural Processing Unit (NPU) for IoT devices | Deep learning accelerators | Software framework for development and deployment of deep learning models for mass-market uses | On-device AI integration into edge devices | Code generation for DEEPX NPU | SDK quantizer converts trained models from 32-bit floating point (FP32) to INT8 or lower-bit integer representation | Optimizer fuses operators or exchanges the order of operators | Runtime API supports commands for model loading, inference execution, passing model inputs, receiving inference data, and a set of functions to manage the devices | Smart Camera Sensors | Machine Vision | Smart Mobility | Drone | Edge Computing | Smart Building | Smart Factory
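The FP32-to-INT8 conversion an SDK quantizer performs is generally an affine mapping onto an integer grid. This is a hedged sketch of that general technique in pure Python; DEEPX's actual quantization scheme is not documented here, so the min/max calibration and the rounding choices below are assumptions for illustration.

```python
# Sketch of FP32 -> INT8 affine quantization: q = round(x / scale) + zero_point.
# Calibration here uses the plain min/max of the values (an assumption; real
# quantizers may use percentile clipping or per-channel scales).

def quantize(values, num_bits=8):
    """Map FP32 values onto a signed num_bits-wide integer grid."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against constant input
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate FP32 values from the integer representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, s, zp = quantize(weights)
recovered = dequantize(q, s, zp)
# Each recovered value differs from the original by at most one scale step.
```

The round trip loses at most half a quantization step per value, which is why INT8 inference can track FP32 accuracy closely when the value range is calibrated well.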


#UnitX | Deep learning inspection system for automated manufacturing


#TSMC | AI chips | GPUs for Nvidia | Parallel computations to train AI models


#CoreWeave | GPU capacity via cloud | GPU-accelerated compute for large-scale users of artificial intelligence, machine learning, real-time rendering, visual effects, and life sciences | Nvidia CUDA software programming platform ecosystem | Customers and developers building on that ecosystem with software tools and libraries, leveraging everyone else's prior work | Nvidia InfiniBand | GPUs to serve inference, the process of generating answers from AI models


#Avnet | Battery-powered sensing systems | Applications processors | GPU | NPU | Power consumption


#NVIDIA | Jetson Orin | Embedded AI platform | NVIDIA Ampere architecture GPU at its core | Deep Learning Accelerator (DLA) for deep learning workloads | Deep learning recommendation models | Programmable Vision Accelerator (PVA) engine for image processing and computer vision algorithms | Multi-Standard Video Encoder (NVENC) | Multi-Standard Video Decoder (NVDEC) | Executes deep learning operations like convolutions much more efficiently than a CPU | Autonomous driving solution stack | Object detection as part of the perception stack | Proximity segmentation | Robotics Platform Software | DeepStream SDK
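The convolution workload that accelerators like the DLA offload is conceptually simple: slide a small kernel over an image and take weighted sums. As a reference for what is being accelerated (not how the hardware does it), here is a naive pure-Python 2D convolution; the example image and the horizontal-gradient kernel are made up for illustration.

```python
# Naive 2D convolution (valid padding, single channel): the arithmetic a
# Deep Learning Accelerator executes in massively parallel fixed-function
# hardware, written out as plain nested loops for reference.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Weighted sum of the kh x kw window anchored at (i, j).
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

edge_kernel = [[-1, 0, 1]] * 3   # simple horizontal-gradient filter
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
result = conv2d(img, edge_kernel)  # responds strongly at the vertical edge
```

Every output cell is an independent multiply-accumulate reduction, which is exactly the data parallelism a GPU or DLA exploits to beat a CPU on this operation.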


#AI21 Labs | Generative text AI | Large language models | Instruction models | Custom models | Multilingual support


#Edge AI and Vision Alliance | Edge AI | Visual AI | Perceptual AI


#NLP Cloud | AI models | Privacy focused | LLaMA 2 | AI models available at the edge and on premise | Fine-tuning client AI models with client data


#OpenAI | GPT | ChatGPT


#Meta | LLaMA


#Stability AI | Stable Diffusion model


#Sensor Cortek | Training metrics: accuracy, precision, false positives, false negatives, F1 score | Analyzing confusion matrices | Identifying class interaction issues
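The metrics this entry lists all fall out of the binary confusion matrix. As a worked reference, here is a pure-Python computation of accuracy, precision, recall, F1, and the false positive/negative counts; the label vectors are made-up example data, and no ML framework is assumed.

```python
# Computing the training metrics named above from a binary confusion matrix.

def confusion(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "false_positives": fp, "false_negatives": fn}

m = metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
# One false positive and one false negative: precision = recall = f1 = 2/3.
```

Analyzing the full confusion matrix (multi-class) extends the same idea: class-interaction issues show up as large off-diagonal counts between specific pairs of classes.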


#Groq | Language Processing Inference Engine | Language Processing Unit (LPU) | AI language applications (LLMs) | Fast generation of text sequences | Groq supports the standard machine learning (ML) frameworks PyTorch, TensorFlow, and ONNX for inference | GroqCloud | Groq Compiler