
AI Agents: Human Synthesis vs Human Substitutes

Human synthesis versus human substitute: these appear to be the two core perspectives for approaching GenAI when it comes to agentic execution.

From a human synthesis perspective, you observe that, for the first time, we have broad computational access to what were previously uniquely human capabilities. At the moment these can be enumerated as the abilities to speak, see, hear, and understand information. When humanoid robotics is mature enough, we can add the ability to use limbs to this synthesis.

Prior to the emergence of LLMs, if you had a process that required any one of these capabilities, you needed to plug a human into the loop. It didn't matter whether the human had any particular expertise or not; if you needed to know what was in a picture or what someone said, you needed a person integrated into your process.

LLMs/GenAI have changed this: we now have computational access to what I would describe as human synthesis. This is different from human substitute, which I presume is what folks pursuing AGI/super-intelligence are after.

I think this distinction needs to be made more salient, because at the moment much of the conversation around AI capabilities tends to present them as human substitutes rather than human synthesis.

The distinction is also very helpful to those building AI solutions. I think it is an effective way for builders to constrain themselves and gain clarity on the nature of the AI capability we currently have access to, and on how to leverage it effectively.
 

Human synthesis means these capabilities are ultimately disjointed: you can't treat current AI agents as human brains inside a computer, as many are currently attempting to do.

Instead, you think of AI agents as providing access to those previously exclusively human abilities, which you can now use within computerized processes in a piecemeal manner without requiring a human being in the loop. A sort of synthetic human, if you will.


Let's consider one common example of an autonomous agent use case: the Insurance Claims Adjuster.

It is a popular example for AI agent solutions. I don't work in insurance claims processing, but I suspect a large fraction of the non-human part of the process can be satisfied with conventional automation.

There are two places where I can see injecting AI capabilities. The first is analyzing the claimant's claim, i.e. their account of what happened, which is likely to arrive in a free-form informational format: text, audio, or video.

The second is analyzing evidence, which would likely be media, though it could also be corroborating statements from a trustworthy entity.

Both of these cases fall under the human synthesis characterization: they are requirements that previously would have needed a human in the loop, because the ability to understand information, to see an image or video, or to hear audio and understand it, was strictly human. With GenAI, that is no longer necessarily the case.
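To make this demarcation concrete, here is a minimal sketch of what such a pipeline might look like. It assumes a generic multimodal model behind a placeholder call_multimodal_model function rather than any particular vendor's API, and it is an illustration of where GenAI slots into an otherwise conventionally automated process, not an implementation of a real claims system.

```python
# A minimal sketch, not a production design. call_multimodal_model is a
# placeholder for whichever GenAI provider you actually use; everything
# else is conventional automation.

from dataclasses import dataclass


@dataclass
class Claim:
    policy_id: str
    account_text: str           # claimant's free-form account of what happened
    evidence_images: list[str]  # paths or URLs of submitted media


def call_multimodal_model(prompt: str, images: list[str] | None = None) -> str:
    """Placeholder for a call to a text/vision-capable model."""
    raise NotImplementedError


def policy_is_active(policy_id: str) -> bool:
    """Conventional automation: a plain database lookup, no AI involved."""
    raise NotImplementedError


def process_claim(claim: Claim) -> dict:
    # Conventional step: eligibility check against the policy system.
    if not policy_is_active(claim.policy_id):
        return {"status": "rejected", "reason": "inactive policy"}

    # AI injection point 1 (human synthesis: understanding free-form information).
    structured_account = call_multimodal_model(
        "Extract the incident date, location, and described damage "
        "from this claimant account:\n" + claim.account_text
    )

    # AI injection point 2 (human synthesis: seeing and describing evidence).
    evidence_summary = call_multimodal_model(
        "Describe the damage visible in these images and note anything "
        "inconsistent with the claimant's account.",
        images=claim.evidence_images,
    )

    # Conventional step: hand the structured results to rules or a human adjuster.
    return {
        "status": "needs_review",
        "account": structured_account,
        "evidence": evidence_summary,
    }
```

The model is doing the seeing and the reading; the adjudication itself stays in conventional code or with a human decision-maker.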

It is, however, important to clearly demarcate and understand where AI fits in this solution, rather than buying into the agentic sales pitch, which would rather talk about an Insurance Claims Adjuster as if she were a lady living inside the computer whom you can now leverage as you please.

Human synthesis, in other words, does not equal human substitute.
