Posts

Showing posts from 2025

AI Agents: Human Synthesis vs Human Substitutes

Human synthesis vs. human substitute: these appear to be the two core perspectives for assessing AI when it comes to agentic execution. From a human synthesis perspective, you observe that for the first time we have broad computational access to what were previously uniquely human capabilities. At the moment these can be enumerated as the ability to speak, see, hear, and understand information. When humanoid robotics are mature enough, we can add the ability to use limbs to this synthesis. Prior to the emergence of LLMs, if you had a process that required any one of these capabilities, you needed to plug a human into the loop. It didn't matter whether the human had any particular expertise or not; if you needed to know what's in a picture or what someone said, you needed a person integrated into your process. LLMs/AI have changed this: we now have computational access to what I would describe as human synthesis. This is different from human substitute, which is what ...

Where do we go from here? Some thoughts and speculation.

A lot of technologists are rightfully fretting about what the future holds for tech careers, especially in software developer roles. Perhaps it is time to think less about what tab-tab-go programming would mean for the future of developer roles and more about how those existing skill sets could be leveraged in an AI world. There is tremendous potential in reorienting technologists from a focus on churning out the next app from an IDE toward thinking in a more holistic manner about how to leverage what has already been built, both in terms of software and infrastructure. The past 30 years or so of the tech industry have been a ginormous build-out of technological capability. We in the industry may not have seen it that way, since we have been the ones engaged in the build-out process. In other words, we have seen the build-out primarily as just our jobs, and less as a process perhaps with a terminal date. I wouldn't go so far as saying the build-out is complete by any means, but it s...

Intelligent Workspace: Managing your AWS Cloud Console via AI

Continuing our series on the "Intelligent Workspace" as an alternative to the chatbot form factor, we have added another demo showcasing the versatility of the environment.

AI Assistant For Biomedical Research

In a previous demo we showcased the idea of the intelligent workspace and its potential use in fields of scientific research. Below we showcase integration with an actual biomedical AI assistant agent for the Biomni platform.

Supporting Drug Discovery Research Via AI Generated UI

Previously we showed Just-In-Time UI generation for an expense reporting app. Today we show the same capabilities being used to support a drug discovery project. In the demo below, we showcase the tight AI integration of the Solvent-Botworx platform and how one might use such an environment to support the drug discovery process.    

Human + Bot Collaboration via Automated UI Generation Part 2

Today we demo a simple expense reporting application that is generated on the fly by AI and used by a human...call it Just-In-Time (JIT) app creation.    

Human + Bot Collaboration via Automated UI Generation

We are releasing some updates to Botworx showcasing context extraction from websites and UI generation to facilitate Human + Bot collaboration.    

Introducing Solvent-Botworx Mobile

Today we are excited to release Solvent-Botworx Mobile, the full Botworx platform via a mobile form factor. You can now take all the power of the Botworx platform on the go.    

Meet Nanny & Watchman, AI security guard and playground monitor

Previously we showed Solvent-Botworx automations making use of human-in-loop capability via AI assistants Billie and Vernon. Today we showcase observation use cases with two new demo assistants, Watchman and Nanny.

Meet Billie & Vernon, your AI workspace side-kicks

We previously released our multi-modal AI-assisted workspace capabilities. Today we release some additional updates showcasing human-in-loop integration. Demo videos: Vernon, Billie, and a combined demo.

AI/LLM CodeGen Tooling via Dependency Injection

AI tools are the tentacles exposed to AI models in order to allow them to access and drive external functionality. The primary form this takes for LLMs is function calling. We have also seen so-called "computer use" and web browsing designated in similar fashion. For code generation, we could also consider the ability to integrate dependencies in the form of external packages as a form of tooling. In other words, anything that extends the capabilities of the AI/LLM model could be thought of as a tool. We can then extend this notion into the runtime environment for the code that LLMs generate, by generating code that depends on runtime dependency injection. Late binding: dependency injection is accomplished via "late binding", meaning the actual implementation of a given piece of functionality isn't specified at the time the code is written, but only at some later point before the logic executes. Late binding can be implemented at the language level; for example, in C++ you have virtual...
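As a loose illustration of the idea (a minimal sketch, not the Solvent-Botworx implementation; all names here are hypothetical), generated code can be written against an abstract interface while the concrete implementation is bound only at runtime:

```python
# Sketch of late-binding dependency injection for LLM-generated code.
# StorageBackend, generated_logic, and InMemoryStorage are illustrative names,
# not part of any specific platform.
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Interface the generated code is written against; no concrete impl yet."""

    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...


# Imagine this function body was produced by an LLM. It references only the
# abstract interface, so the real implementation can be bound later.
def generated_logic(storage: StorageBackend) -> str:
    storage.save("greeting", "hello from generated code")
    return storage.load("greeting")


class InMemoryStorage(StorageBackend):
    """One possible implementation, chosen at runtime (late binding)."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]


if __name__ == "__main__":
    # The host environment injects the concrete dependency just before execution.
    print(generated_logic(InMemoryStorage()))
```

The same generated function could be handed a database-backed or cloud-backed implementation without touching the generated code itself.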

Towards Multi-Modal AI Assisted Workspaces

We have released updates to the Solvent-Botworx platform that include the introduction of automation programs and the addition of multi-modal capabilities. Multi-modal speech capabilities in particular are an intriguing development for integration into "deep-work" workspaces. One could imagine a future where anyone doing heavy-duty cognitive work will be working within a software workspace with seamless multi-modal AI integrations. With the right type of integrations, multi-modal capabilities offer the possibility of true AI-aided assistance. Below are demo videos: a speech modality demo, and an image and speech modality demo.

Applying fork/join model and git-style cherry picking to AI conversations

In this post we introduce using the fork/join concurrency pattern combined with git-style cherry-picking to manage and use LLM conversations. These are loose conceptual connections, so please put down the pitchfork (pun intended). Just as in git you can cherry-pick a commit from one branch to create a new head on another branch, you can pick conversations across sessions (including between different models) to build context to drive LLM conversations. You can also create a fork of a session; this creates a parallel set of messages that are detached from, i.e. forked off, the main session within which they run. You can create new messages that join an existing fork. This is a nice capability; the degree to which it will prove useful is yet to be determined. One of the possibilities is giving the user the ability to create long-term "memory" from AI conversations and to easily reuse that memory (a set of messages) without the need to rebuild context. The ability to...
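As a rough sketch of the data model this implies (my own illustration, not the platform's actual API; all names are hypothetical), a session can be treated as a list of messages from which individual messages are cherry-picked into a new context, and a fork as a parallel message list that references its parent session:

```python
# Hypothetical sketch of cherry-picking and forking LLM conversation sessions.
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str


@dataclass
class Session:
    name: str
    messages: list[Message] = field(default_factory=list)
    parent: "Session | None" = None   # set when this session is a fork

    def fork(self, name: str) -> "Session":
        """Create a parallel session detached from (but referencing) this one."""
        return Session(name=name, parent=self)

    def cherry_pick(self, source: "Session", indices: list[int]) -> None:
        """Copy selected messages from another session into this one's context."""
        for i in indices:
            self.messages.append(source.messages[i])


# Build some prior context in one session...
research = Session("research-notes")
research.messages += [
    Message("user", "Summarize the key findings so far."),
    Message("assistant", "The main result is ..."),
]

# ...then reuse just the useful messages in a new session (possibly a different model).
review = Session("weekly-review")
review.cherry_pick(research, indices=[1])

# Fork the review session to explore a side question without polluting it.
side_quest = review.fork("weekly-review-fork")
side_quest.messages.append(Message("user", "What follow-up questions make sense?"))
```

Under this framing, "memory" is simply a saved set of messages that can be cherry-picked into any future session instead of rebuilding context from scratch.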