#Deep Learning Inference Application Meta
#Convolutional Neural Network
#Artificial Intelligence
#Machine Learning
#Artificial Neural Network
#Image Analysis
#Shift Invariant Neural Network
#Shared Weight Architecture
#Convolutional Kernel
#Filters
#Feature Map
#Image Recognition
#Video Recognition
#Recommender System
#Image Classification
#Image Segmentation
#Media Image Analysis
#Natural Language Processing
#Brain Computer Interface
#Financial Time Series
#Multilayer Perceptron
#Fully Connected Networks
#Each Neuron Connected To All Neurons In The Next Layer
#Overfitting Data
#Penalizing Parameters
#Weight Decay
#Skip Connections
#Hierarchical Patterns In Data
#Biological Process
#Animal Visual Cortex
#Cortical Neuron
#Receptive Field
#Perceptual AI
#Edge AI
#Token
#Fine-tuning
#AI model
#Tokenization
#Speech to text
#Text classification
#Sentiment
#Semantic similarity
#Semantic search
#Part-of-Speech tagging
#Named Entity Recognition
#Intent classification | Intent detection | Intent recognition
#Summarization
#Code Generation
#Training Convolutional Neural Networks (CNN)
#Error surface learning
#Gradient-based learning
#Hyperparameters
#Loss Functions
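As a concrete companion to the training tags above, here is a minimal gradient-based training sketch, assuming PyTorch and random stand-in image data (the model shape and hyperparameter values are illustrative assumptions, not a recommended recipe):

```python
import torch
import torch.nn as nn

# Hyperparameters (assumed values, for illustration only)
learning_rate = 1e-3
batch_size = 32
epochs = 2

# A small CNN: convolutional filters produce feature maps,
# followed by a fully connected classification head
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional kernel / filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # fully connected layer
)

loss_fn = nn.CrossEntropyLoss()                  # loss function
# Weight decay penalizes large parameters to reduce overfitting
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
                            weight_decay=1e-4)

for epoch in range(epochs):
    # Random stand-in batch; replace with a real image dataset
    images = torch.randn(batch_size, 1, 28, 28)
    labels = torch.randint(0, 10, (batch_size,))

    logits = model(images)
    loss = loss_fn(logits, labels)

    optimizer.zero_grad()
    loss.backward()   # gradient-based learning over the error surface
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```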
#Text-to-image diffusion model
#Idiosyncratic prompt
#Prompt alignment
#Direct reward fine-tuning (DRaFT)
#Differentiable reward function
#Complex prompt
#DRaFT method
#DRaFT+ algorithm
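The DRaFT entries above center on backpropagating a differentiable reward through the diffusion sampling chain; the sketch below illustrates only that core idea with toy stand-in modules (ToyDenoiser and toy_reward are assumptions for illustration, not the published DRaFT implementation):

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion denoiser; real models also condition
    on the timestep and the text prompt."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, x, t):
        return self.net(x)

def toy_reward(x):
    # Stand-in differentiable reward; DRaFT uses learned reward models
    # (e.g., aesthetic or prompt-alignment scorers)
    return -(x ** 2).mean()

denoiser = ToyDenoiser()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
steps = 10  # number of sampling steps

for it in range(100):
    x = torch.randn(4, 8)  # start from pure noise
    for t in reversed(range(steps)):
        # Differentiable sampling step; DRaFT-K variants detach gradients
        # for all but the last K steps to save memory
        x = x - 0.1 * denoiser(x, t)
    loss = -toy_reward(x)  # maximize reward by backprop through the chain
    opt.zero_grad()
    loss.backward()
    opt.step()
```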
#Custom generative AI
#Training
#Layer and tensor fusion
#Retrieval-augmented generation
#Guardrailing
#Data curation
#Pretrained model
#Reinforcement learning from human feedback (RLHF)
#Large language model (LLM)
#Generative text-to-image
#Reinforcement learning (RL)
#Prompt domain
#Backpropagating differentiable reward through diffusion process
#Over-optimization
#Mode collapse
#Script
#Deep learning algorithm
#Model alignment
#Workflows for GenAI models
#Deep generative learning
#Weakly supervised learning
#Neural network
#Prompt engineering
#Quantization
#Vision-Language Model (VLM)
#Deep neural network
#Vectorized neural network
#Deep-learning framework
#Pre-training method
#Fine-tuning method
#Fine-tuning 2D model on 3D scans
#SLice Integration by Vision Transformer (SLIViT)
#Downstream learning
#4D deep learning model
#A-list celebrity home protector | Burglaries targeting high-end items | Burglary report on Lime Orchard Road | Burglar had smashed glass door of residence | Ransacked home and fled | Couple were not home at the time | Unknown whether any items were taken | Lime Orchard Road is within the Hidden Valley gated community in Beverly Hills, Los Angeles | Penelope Cruz, Cameron Diaz, Jennifer Lawrence, Adele and Katy Perry have purchased homes there, in addition to Kidman and Urban | Kidman and Urban bought their home for $4.7 million in 2008 | 4,100-square-foot, five-bedroom home built in 1965 that sits on a 1¼-acre lot | Property's large windows have views of the canyons | Theirs is one of several celebrity properties burglarized in Los Angeles and across the country recently | Connected to South American organized-theft rings
#Professional athlete home protector | South American crime rings | Targeting wealthy Southern California neighborhoods for sophisticated home burglaries | Behind burglaries at homes of professional athletes and celebrities | Theft groups conduct extensive research before plotting burglaries | Monitoring target whereabouts and weekly routines via social media | Tracking travel and schedules | Conducting physical surveillance at homes | Attacks staged while targets and their families are away | Robbers aware of where valuables are stored in homes prior to staging break-ins | Burglaries conducted in short amount of time | Bypass alarm systems | Use Wi-Fi jammers to block Wi-Fi connections | Disable devices | Cover security cameras | Obfuscate identities
#27B parameter model | Google Gemma 2 | High-performing lightweight language model | Designed for efficiency and versatility | Part of the Gemma family | Available in three sizes: 2B, 9B, and 27B parameters | 27B variant trained on 13 trillion tokens (web documents, code, and mathematics) | Excels in text generation, summarization, and reasoning
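A minimal usage sketch, assuming the Hugging Face transformers library and access to the gated checkpoint (the exact model id google/gemma-2-27b-it is an assumption):

```python
# Text-generation sketch for Gemma 2 27B (instruction-tuned variant);
# device_map="auto" additionally requires the accelerate package
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-27b-it"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize why convolutional networks suit image recognition."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```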
#ROS 2 | The second version of the Robot Operating System | Communication, compatibility with other operating systems | Authentication and encryption mechanisms | Works natively on Linux, Windows, and macOS | Fast RTPS based on DDS (Data Distribution Service) | Programming languages: C++, Python, Rust
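A minimal ROS 2 node sketch in Python, assuming a sourced ROS 2 installation with the rclpy client library; the node and topic names are illustrative:

```python
# Minimal ROS 2 publisher node using rclpy
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class TalkerNode(Node):
    def __init__(self):
        super().__init__("talker")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)  # fire once per second

    def tick(self):
        msg = String()
        msg.data = "hello from ROS 2"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = TalkerNode()
    rclpy.spin(node)  # messages travel over the DDS-based middleware
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```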
#Agentic AI | Artificial intelligence systems with a degree of autonomy, enabling them to make decisions, take actions, and learn from experiences to achieve specific goals, often with minimal human intervention | Agentic AI systems are designed to operate independently, unlike traditional AI models that rely on predefined instructions or prompts | Reinforcement learning (RL) | Deep neural network (DNN) | Multi-agent system (MAS) | Goal-setting algorithm | Adaptive learning algorithm | Agentic agents focus on autonomy and real-time decision-making in complex scenarios | Ability to determine intent and outcome of processes | Planning and adapting to changes | Ability to self-refine and update instructions without outside intervention | Full autonomy requires creativity and the ability to proactively anticipate changing needs before they occur | Agentic AI benefits Industry 4.0 facilities by monitoring machinery in real time, predicting failures, scheduling maintenance, reducing downtime, and optimizing asset availability, enabling continuous process optimization, minimizing waste, and enhancing operational efficiency
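One common way to structure such a system is a sense-decide-act loop with self-refinement; the sketch below illustrates that loop for the Industry 4.0 maintenance scenario, and every class name, sensor field, and threshold in it is a hypothetical stand-in:

```python
import random

class MaintenanceAgent:
    """Toy agentic loop: observe machinery, decide, act, and adapt its own
    goal threshold without outside intervention. All names are hypothetical."""

    def __init__(self, failure_threshold=0.7):
        self.failure_threshold = failure_threshold  # self-refined over time

    def observe(self):
        # Stand-in for real-time sensor telemetry from the factory floor
        return {"vibration": random.random(), "temperature": random.random()}

    def decide(self, reading):
        risk = 0.6 * reading["vibration"] + 0.4 * reading["temperature"]
        return "schedule_maintenance" if risk > self.failure_threshold else "monitor"

    def act(self, action):
        print(f"action: {action}")
        return action == "schedule_maintenance"

    def adapt(self, intervened):
        # Crude self-refinement: relax or tighten the goal threshold
        self.failure_threshold += 0.01 if intervened else -0.005

agent = MaintenanceAgent()
for _ in range(5):
    agent.adapt(agent.act(agent.decide(agent.observe())))
```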
#Large Language Model (LLM) | Foundational LLM: e.g., Wikipedia in all its languages fed to the LLM one word at a time | LLM is trained to predict the next word most likely to appear in that context | LLM intelligence is based on its ability to predict what comes next in a sentence | LLMs are amazing artifacts, containing a model of all of language on a scale no human could conceive or visualize | LLMs do not assign any value to information, or to the truthfulness of the sentences and paragraphs they have learned to produce | LLMs are powerful pattern-matching machines but lack human-like understanding, common sense, or ethical reasoning | LLMs produce merely a statistically probable sequence of words based on their training | LLMs are very good at summarizing | Inappropriate use of LLMs as search engines has produced lots of unhappy results | LLM output follows the path of most likely words and assembles them into sentences | Pathological liars as a source of information | Incredibly good at turning pre-existing information into words | Give them facts and let them explain or impart them
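A minimal sketch of that next-word prediction, assuming the Hugging Face transformers library and the small gpt2 checkpoint as a stand-in for a large model:

```python
# Inspect the probability distribution an LLM assigns to the next token
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The quick brown fox jumps over the"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Distribution over the vocabulary for the next token
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>10s}  {p.item():.3f}")
```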
#Retrieval-Augmented Generation (RAG LLM) | Designed for answering queries in a specific subject, for example, how to operate a particular appliance, tool, or type of machinery | System ingests as much textual information about the subject as possible, such as user manuals, and pre-processes it into small chunks each containing a few specific facts | When a user asks a question, the software system identifies the chunk of text most likely to contain the answer | Question and retrieved chunk are then fed to the LLM, which generates a human-language answer to the query | Enforcing factualness on LLMs
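A minimal end-to-end sketch of that pipeline, with toy word-overlap scoring standing in for real embedding search and a hypothetical call_llm for the generation step:

```python
# Minimal RAG sketch: split a manual into chunks, retrieve the chunk most
# similar to the question, and assemble a grounded prompt for an LLM.

manual = (
    "Press the power button for two seconds to start the appliance. "
    "Clean the filter monthly. "
    "The red light means the water tank is empty."
)

# Pre-process into small chunks, each holding a few specific facts
chunks = [c.strip().rstrip(".") + "." for c in manual.split(". ") if c.strip()]

def score(question, chunk):
    # Toy retrieval: count shared words; real systems use embeddings
    q = set(question.lower().rstrip("?").split())
    c = set(chunk.lower().rstrip(".").split())
    return len(q & c)

question = "What does the red light mean?"
best_chunk = max(chunks, key=lambda c: score(question, c))

# Question and retrieved chunk are fed to the LLM together, enforcing
# factualness; call_llm is a hypothetical stand-in for the generation step
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
print(prompt)  # pass to call_llm(prompt) in a real system
```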
#Vision-language model (VLM) | Training vision models when labeled data unavailable | Techniques enabling robots to determine appropriate actions in novel situations | LLMs used as visual reasoning coordinators | Using multiple task-specific models
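A minimal zero-shot sketch of vision-language inference, assuming the transformers and Pillow libraries and the openai/clip-vit-base-patch32 checkpoint; no labeled training data is needed because the text encoder supplies the class descriptions:

```python
# Zero-shot image classification with a vision-language model (CLIP)
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "gray")  # stand-in for a real photo
labels = ["a robot arm", "a coffee mug", "a city street"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores

probs = logits.softmax(dim=-1)[0]
for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.3f}")
```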