#006: Validation Action Model (VAM) - An intelligent continuous validation agent
Continuous Validation Managed Service (CVMS) - Quick Background
xLM has been an industry leader in providing our GxP Continuous Validation Managed Service (CVMS) since 2016. We have delivered CVMS for a number of life science clients and technology partners.
Our CVMS is a complete framework that lets our technical team take any software validation project (Cloud or On-Prem) from user requirements to automated software validation in a matter of days. It also allows us to maintain any target app in a "validated state" through its life cycle of patches, releases, and changes by executing 100% regression suites in minutes or hours.
Our technical stack includes the xLM BDD (Behavior-Driven Development) Framework, xLM Reporting Services, and Azure DevOps. All code is written in C#.
Click here to get a quick overview of our tech stack including demos.
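For readers new to BDD, here is a minimal sketch of what a C# step binding behind a Gherkin scenario can look like. It uses SpecFlow-style attributes and NUnit assertions purely as generic stand-ins; it is not the xLM BDD Framework, and all names in it are hypothetical.

```csharp
// Hypothetical example of a BDD step binding in C# (SpecFlow-style).
// The real xLM BDD Framework is not shown; this only illustrates the pattern.
using TechTalk.SpecFlow;
using NUnit.Framework;

[Binding]
public class LoginValidationSteps
{
    private bool _loginSucceeded;

    [Given(@"the target application is reachable")]
    public void GivenTheTargetApplicationIsReachable()
    {
        // In practice this would open the app under test (web or on-prem client).
    }

    [When(@"a qualified user logs in with valid credentials")]
    public void WhenAQualifiedUserLogsIn()
    {
        // Placeholder for the automated UI/API interaction.
        _loginSucceeded = true;
    }

    [Then(@"access is granted and the action is recorded in the audit trail")]
    public void ThenAccessIsGrantedAndAudited()
    {
        Assert.IsTrue(_loginSucceeded, "Login should succeed for a valid, qualified user.");
        // Evidence (screenshots, logs) would be attached to the validation report here.
    }
}
```

A matching Gherkin scenario would read: Given the target application is reachable, When a qualified user logs in with valid credentials, Then access is granted and the action is recorded in the audit trail.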
Since 2023, our R&D team at Continuous Labs has been working on our next-gen CVMS. This next-gen framework will be based on Large Action Model (LAM) technology and will enable us to deploy intelligent validation agents built on our Validation Action Model (VAM).
A VAM agent will be able to explore any software app on its own and understand its functionality. Based on that understanding, the agent will lay out the validation strategy, including test cases. The fun continues from here! Once the validation strategy is approved, the agent can author the test cases, get them approved by a human in the loop, and execute them. The results are passed on to a reporting agent, which generates a PDF consisting of TPEs (Test Protocols Executed). These TPEs are then approved by QA humans in the loop.
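Conceptually, this workflow is a pipeline with human approval gates. The sketch below illustrates that flow under assumed, hypothetical interfaces (IValidationAgent, IReportingAgent, VamPipeline); it is not Project Manava code.

```csharp
// Illustrative sketch of the intended VAM workflow with human-in-the-loop gates.
// All types here are hypothetical; the real Project Manava implementation differs.
using System;
using System.Collections.Generic;

public record TestCase(string Id, string Title, IReadOnlyList<string> Steps);
public record TestResult(string TestCaseId, bool Passed, string Evidence);

public interface IValidationAgent
{
    string ExploreApplication(Uri appUrl);                    // learn the app's functionality
    IReadOnlyList<TestCase> ProposeStrategy(string appModel); // draft strategy + test cases
    IReadOnlyList<TestResult> Execute(IEnumerable<TestCase> approved);
}

public interface IReportingAgent
{
    byte[] GenerateTpePdf(IEnumerable<TestResult> results);   // Test Protocol Executed (TPE)
}

public static class VamPipeline
{
    public static void Run(IValidationAgent agent, IReportingAgent reporter,
                           Func<IReadOnlyList<TestCase>, bool> humanApprovesStrategy,
                           Func<byte[], bool> qaApprovesTpe, Uri appUrl)
    {
        string appModel = agent.ExploreApplication(appUrl);
        IReadOnlyList<TestCase> draft = agent.ProposeStrategy(appModel);

        if (!humanApprovesStrategy(draft))
            return;                                            // stop at the first approval gate

        IReadOnlyList<TestResult> results = agent.Execute(draft);
        byte[] tpe = reporter.GenerateTpePdf(results);

        Console.WriteLine(qaApprovesTpe(tpe)
            ? "TPE approved by QA."
            : "TPE rejected; rework required.");
    }
}
```

The key design point is that nothing moves past the strategy or TPE stages without an explicit human (or QA) approval callback.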
Project Manava: VAM Model
VAM agents are based on the Large Action Model (LAM), which in turn leverages LLMs.
Large Action Model (LAM) - Quick Review
Large Action Models (LAMs) are a new class of AI systems that go beyond the capabilities of traditional large language models (LLMs) like GPT-4. While LLMs excel at understanding and generating human-like text, LAMs are designed to translate that understanding into concrete actions.
Action Representation: LAMs employ a formal representation of actions using a combination of high-level symbolic representations and low-level neural network-based representations. This allows for flexibility and expressiveness in representing a wide range of actions, from simple tasks to complex workflows.
Action Hierarchy: LAMs utilize a hierarchical structure to represent actions. Actions are organized into a tree-like structure, where higher-level actions encapsulate lower-level actions. This hierarchical organization enables efficient planning and execution of complex actions.
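To make the previous two points concrete, here is a toy sketch of a hybrid, hierarchical action representation: each action carries a symbolic name and parameters plus a (stubbed) embedding vector, and composite actions encapsulate lower-level ones in a tree. The ActionNode type and the login example are purely illustrative assumptions, not a description of any specific LAM implementation.

```csharp
// Toy sketch: symbolic action descriptions paired with embeddings, composed into a tree.
using System;
using System.Collections.Generic;

public record ActionNode(
    string Name,                               // high-level symbolic label, e.g. "LogIn"
    IReadOnlyDictionary<string, string> Args,  // symbolic parameters
    float[] Embedding,                         // low-level neural representation (stubbed here)
    IReadOnlyList<ActionNode> SubActions)      // children; empty for primitive actions
{
    public bool IsPrimitive => SubActions.Count == 0;
}

public static class ActionHierarchyDemo
{
    private static ActionNode Primitive(string name, string key, string value) =>
        new(name, new Dictionary<string, string> { [key] = value },
            new float[8], Array.Empty<ActionNode>());

    public static void Main()
    {
        // A higher-level "LogIn" action encapsulating three primitive UI actions.
        ActionNode login = new(
            "LogIn",
            new Dictionary<string, string>(),
            new float[8],
            new[]
            {
                Primitive("TypeText", "field", "username"),
                Primitive("TypeText", "field", "password"),
                Primitive("Click", "element", "LoginButton"),
            });

        Print(login, 0);
    }

    private static void Print(ActionNode node, int depth)
    {
        Console.WriteLine($"{new string(' ', depth * 2)}{node.Name}");
        foreach (ActionNode child in node.SubActions) Print(child, depth + 1);
    }
}
```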
Planning Engine: LAMs incorporate a powerful planning engine that generates action sequences to achieve desired goals. The planning engine considers the current state, available actions, and the goal to create a plan that maximizes the chances of success. This allows LAMs to reason about the steps required to complete a task, rather than just executing a predefined sequence.
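A toy version of such a planning step is sketched below as a breadth-first search over state transitions: given the current state, a goal, and a set of available actions, it returns an ordered action sequence. Real LAM planners are far more sophisticated; the PlannedAction type and the login scenario here are assumptions made for illustration only.

```csharp
// Toy goal-directed planner: breadth-first search over state transitions.
// Real LAM planners combine learned models with search; this only shows the idea.
using System;
using System.Collections.Generic;
using System.Linq;

public record PlannedAction(string Name, Func<string, string> Apply); // state -> new state

public static class TinyPlanner
{
    public static List<string>? Plan(string start, string goal, IReadOnlyList<PlannedAction> actions)
    {
        var queue = new Queue<(string State, List<string> Steps)>();
        var seen = new HashSet<string> { start };
        queue.Enqueue((start, new List<string>()));

        while (queue.Count > 0)
        {
            var (state, steps) = queue.Dequeue();
            if (state == goal) return steps;                 // plan found

            foreach (PlannedAction a in actions)
            {
                string next = a.Apply(state);
                if (seen.Add(next))
                    queue.Enqueue((next, steps.Append(a.Name).ToList()));
            }
        }
        return null;                                         // no plan reaches the goal
    }

    public static void Main()
    {
        var actions = new List<PlannedAction>
        {
            new("OpenApp",   s => s == "closed"    ? "loginPage"  : s),
            new("LogIn",     s => s == "loginPage" ? "dashboard"  : s),
            new("OpenAudit", s => s == "dashboard" ? "auditTrail" : s),
        };
        List<string>? plan = Plan("closed", "auditTrail", actions);
        Console.WriteLine(plan is null ? "no plan" : string.Join(" -> ", plan));
        // Prints: OpenApp -> LogIn -> OpenAudit
    }
}
```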
Execution Module: LAMs' execution module is responsible for executing the generated action sequences. It coordinates the execution of sub-actions, ensuring that the actions are performed in the correct order and with the necessary coordination. This allows LAMs to seamlessly interact with various applications and systems to complete complex tasks.
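In miniature, the execution idea can look like the sketch below: a hierarchical plan is walked depth-first, primitive steps run in order, and execution stops at the first failure so the system remains in a known state. The Step type and the example plan are hypothetical.

```csharp
// Illustrative executor: walks a hierarchical plan depth-first, runs primitives in order,
// and aborts on the first failure. Types are hypothetical.
using System;
using System.Collections.Generic;

public record Step(string Name, Func<bool> Run, IReadOnlyList<Step> SubSteps);

public static class PlanExecutor
{
    public static bool Execute(Step step)
    {
        if (step.SubSteps.Count == 0)
        {
            bool ok = step.Run();                    // primitive action, e.g. a UI click
            Console.WriteLine($"{step.Name}: {(ok ? "PASS" : "FAIL")}");
            return ok;
        }

        foreach (Step child in step.SubSteps)        // composite action: run children in order
            if (!Execute(child)) return false;       // abort on failure so state stays known

        return true;
    }

    public static void Main()
    {
        var leafA = new Step("OpenLoginPage", () => true, Array.Empty<Step>());
        var leafB = new Step("SubmitCredentials", () => true, Array.Empty<Step>());
        var root  = new Step("LogIn", () => true, new[] { leafA, leafB });
        Console.WriteLine(Execute(root) ? "Plan completed." : "Plan aborted.");
    }
}
```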
Learning and Adaptation: LAMs can learn and adapt over time. They can refine their understanding of actions, optimize their planning and execution, and improve their performance through continuous interaction and feedback. This enables LAMs to become more efficient and effective at completing tasks as they gain more experience.
Neuro-Symbolic Integration: LAMs combine the pattern recognition capabilities of neural networks with the reasoning and abstraction abilities of symbolic AI. This neuro-symbolic integration enables LAMs to interpret abstract concepts, perform logical operations, and reason about complex tasks in a more sophisticated manner.
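The last two points can be illustrated together with a small sketch: an action selector that enforces symbolic preconditions, ranks the surviving candidates with a stand-in for a learned (neural) relevance score, and adapts over time by tracking observed success rates. Everything here, from the AdaptiveSelector class to the precondition rule and scoring, is an illustrative assumption rather than a description of any particular LAM.

```csharp
// Sketch of (1) adaptation via simple per-action success statistics and
// (2) neuro-symbolic selection: a symbolic precondition check combined with a
// stubbed "neural" relevance score. Purely illustrative.
using System;
using System.Collections.Generic;
using System.Linq;

public class AdaptiveSelector
{
    private readonly Dictionary<string, (int Successes, int Attempts)> _stats = new();

    // Symbolic side: hard constraints that must hold before an action is considered.
    public bool PreconditionHolds(string action, string state) =>
        action != "SubmitForm" || state == "formFilled";

    // Neural-side stand-in: in a real LAM this would come from a learned model.
    public double NeuralScore(string action, string state) =>
        Math.Abs((action + state).GetHashCode() % 100) / 100.0;

    public string Choose(IEnumerable<string> candidates, string state) =>
        candidates
            .Where(a => PreconditionHolds(a, state))                  // reason symbolically
            .OrderByDescending(a => NeuralScore(a, state) + SuccessRate(a))
            .First();

    public void RecordOutcome(string action, bool success)            // learn from feedback
    {
        var (s, n) = _stats.TryGetValue(action, out var v) ? v : (0, 0);
        _stats[action] = (s + (success ? 1 : 0), n + 1);
    }

    private double SuccessRate(string action) =>
        _stats.TryGetValue(action, out var v) && v.Attempts > 0
            ? (double)v.Successes / v.Attempts
            : 0.5;                                                     // optimistic prior

    public static void Main()
    {
        var sel = new AdaptiveSelector();
        string chosen = sel.Choose(new[] { "SubmitForm", "FillForm" }, state: "formEmpty");
        sel.RecordOutcome(chosen, success: true);  // feedback shifts future choices toward what works
        Console.WriteLine(chosen);                 // "FillForm" (SubmitForm's precondition fails here)
    }
}
```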
The technical capabilities of LAMs represent a significant advancement in the field of artificial intelligence, bridging the gap between language understanding and task execution. By combining powerful planning, reasoning, and learning abilities, LAMs have the potential to revolutionize the field of software validation.
LAM - A commercial example
The Rabbit R1 is a groundbreaking AI-powered device developed by Rabbit Inc. that represents a significant advancement in personal computing and human-machine interaction.
- Compact, square design with a 2.88-inch color touchscreen, push-to-talk button, and rotating camera
- Runs on Rabbit OS, which utilizes a Large Action Model (LAM)
- LAM allows the Rabbit R1 to perform a wide range of tasks like booking a cab, ordering food, sending emails, and even planning complex trips with multiple bookings
- The device can learn new skills through a single interaction, adapting to the user's preferences and habits over time
Conclusion
The current AI revolution will change Computer Validation forever. Our Project Manava aims to bring LAM technology to GxP Software Validation. Our VAM agents will be able to test software effortlessly, with self-healing capabilities. The test coverage will be 100x better than current manual validation.
With this newfound technology, we will be able to truly bring effortless validation to our customer projects.
Watch this edition on YouTube
Listen to this edition on Spotify
Current Happenings in AI
Current AI Applications in Life Sciences
What questions do you have about artificial intelligence in Life sciences? No question is too big or too small.