
Using AI (OpenAI, Anthropic, Gemini) Function Calling To Automate Dev Environment Actions


In this post we showcase direct AI integration support, then use that integration to automate the process of issuing a prompt and generating a Lowcode app.

The integration approach shown allows a developer to build all sorts of automation "recipes" that drive the developer environment.
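As a rough illustration of how such a recipe can work, the sketch below shows the general function-calling pattern the major providers share: the model is given a tool schema, and the automation layer dispatches the model's tool call to a local dev-environment action. The tool name `generate_lowcode_app` and the dispatcher are hypothetical stand-ins for illustration, not the actual Solvent-Botworx API.

```python
import json

# Hypothetical tool schema in the OpenAI-style function-calling format.
# Anthropic and Gemini accept structurally similar JSON-schema tool
# definitions, so one recipe can target all three with light adaptation.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "generate_lowcode_app",  # assumed action name
            "description": "Generate a Lowcode app from a text prompt.",
            "parameters": {
                "type": "object",
                "properties": {
                    "prompt": {
                        "type": "string",
                        "description": "Description of the app to generate.",
                    },
                },
                "required": ["prompt"],
            },
        },
    },
]


def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to a local dev-environment action.

    `tool_call` mirrors the shape the model returns: a tool name plus
    JSON-encoded arguments.
    """
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])
    if name == "generate_lowcode_app":
        # Placeholder: a real recipe would invoke the dev environment here.
        return f"Generated Lowcode app from prompt: {args['prompt']}"
    raise ValueError(f"Unknown tool: {name}")


# Example: simulate the tool call a model might emit for the prompt flow.
call = {
    "name": "generate_lowcode_app",
    "arguments": json.dumps({"prompt": "simple inventory tracker"}),
}
print(dispatch(call))
```

In a live recipe, the tool schema is sent with the chat request, and `dispatch` runs whenever the model responds with a tool call instead of plain text; its return value is fed back to the model as the tool result.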

Part 1 - Basic AI Integration Demo (OpenAI, Anthropic, Gemini)



Part 2 - CodeGen and Lowcode Integration via automation using all three models.







