
Applying the fork/join model and git-style cherry picking to AI conversations

In this post we introduce using the fork/join concurrency pattern combined with git-style cherry picking to manage and use LLM conversations.

These are loose conceptual connections, so please put down the pitchfork (pun intended).

Just as in git you can cherry-pick a commit from one branch to create a new head on another, you can pick conversations across sessions (including between different models) to build the context that drives an LLM conversation.
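To make the analogy concrete, here is a minimal sketch in Python of what picking messages from one session into another might look like. The Message and Session classes and the pick_into method are hypothetical illustrations of the idea, not the actual platform API.

```python
# Minimal sketch of cherry-picking messages across sessions.
# Message, Session, and pick_into are hypothetical illustrations,
# not the actual platform API.
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class Message:
    role: str       # "user" or "assistant"
    content: str
    model: str      # model the message was exchanged with


@dataclass
class Session:
    name: str
    messages: List[Message] = field(default_factory=list)

    def pick_into(self, target: "Session", indices: List[int]) -> None:
        """Copy the selected messages into another session,
        much like git cherry-pick copies commits onto another branch."""
        for i in indices:
            target.messages.append(self.messages[i])


# Seed a new session with context picked from an older one,
# even though the two sessions used different models.
old = Session("research", [
    Message("user", "Summarize the design discussion", "gpt-4"),
    Message("assistant", "The discussion covered ...", "gpt-4"),
])
new = Session("follow-up")
old.pick_into(new, [0, 1])
```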

You can also create a fork of a session: this creates a parallel set of messages that are detached from, i.e. forked off, the main session within which they run.

You can create new messages that join an existing fork.
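Continuing the sketch above (reusing its Message and Session classes), a fork might be modeled as a labeled, detached tail of messages: new messages join the fork rather than the main session, and a model continuing the fork sees the base session plus the fork's own messages. Fork and its methods are again illustrative assumptions, not a real API.

```python
# Sketch of a session fork, continuing the classes defined above.
# Fork, join, and context are illustrative assumptions, not a real API.
@dataclass
class Fork:
    label: str                  # what this fork represents, for the user
    base: Session               # the session the fork branched off
    messages: List[Message] = field(default_factory=list)

    def join(self, message: Message) -> None:
        """Append a message to the fork, leaving the base session untouched."""
        self.messages.append(message)

    def context(self) -> List[Message]:
        """The messages a model would see when continuing this fork."""
        return self.base.messages + self.messages


side = Fork("alternative-design", base=new)
side.join(Message("user", "Explore a different approach instead", "claude"))
# The main session `new` is unchanged; the fork carries its own tail.
```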

This is a nice capability; the degree to which it will prove useful remains to be seen. One possibility is giving the user the ability to create long-term "memory" from AI conversations and to easily reuse that memory (a set of messages) without having to rebuild context.
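Under the same hypothetical model, reusing a saved fork as memory could be as simple as picking its messages into a fresh session:

```python
# Reuse a labeled fork as long-term "memory" for a fresh session,
# restoring context in one step instead of rebuilding it by hand.
# (Hypothetical, continuing the sketches above.)
fresh = Session("new-task")
fresh.messages.extend(side.context())
```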

The ability to label forks is especially helpful in letting the user know what a fork represents, and thus offers the potential for greater utility. Below is a demo of this feature:

[Video demo]