
AI/LLM CodeGen Tooling via Dependency Injection

AI tools are the tentacles exposed to AI models, allowing them to access and drive external functionality. The primary form this takes for LLMs is function calling. We have also seen so-called "computer use" and web browsing exposed in similar fashion.

For code generation, we could also consider the ability to integrate dependencies in the form of external packages as a form of tooling. In other words, anything that extends the capabilities of the AI/LLM model can be thought of as a tool.

We can then extend this notion into the runtime environment for the code that LLMs generate by generating code that depends on runtime dependency injection.

Late Binding

Dependency injection is accomplished via "late binding", meaning the actual implementation of a given piece of functionality isn't specified at the time the code is written; it is supplied at some later point before the logic executes.

Late binding can be implemented at the language level. In C++, virtual functions are eventually bound via dynamic binding. In Java, interfaces can define the API while the actual implementation is supplied later by some module/jar artifact.
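To make this concrete, here is a minimal TypeScript sketch of interface-based late binding (the `Greeter` and `FriendlyGreeter` names are illustrative, not from any particular library): the calling code is written against the contract only, and the concrete implementation is bound at the call site.

```typescript
// Contract: code is written and compiled against this, with no implementation chosen yet.
interface Greeter {
  greet(name: string): string;
}

// Caller code depends only on the interface.
function welcome(greeter: Greeter, name: string): string {
  return greeter.greet(name);
}

// A concrete implementation supplied later, e.g. by a module loaded at runtime.
class FriendlyGreeter implements Greeter {
  greet(name: string): string {
    return `Hello, ${name}!`;
  }
}

// Binding happens here, at the call site, not where `welcome` was written.
console.log(welcome(new FriendlyGreeter(), "world")); // prints "Hello, world!"
```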

Beyond language-specific constructs, late binding can be accomplished in a broad set of ways. If a language supports passing functions as arguments, you can in effect bind functionality at the moment the function that receives the function argument is invoked.
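A quick sketch of this function-argument form of late binding (function names here are illustrative): the transform applied inside `applyTwice` is not known when it is written, only when it is called.

```typescript
// The transform to apply is not known when `applyTwice` is written;
// it is bound at the moment of invocation via the function argument.
function applyTwice(f: (x: number) => number, x: number): number {
  return f(f(x));
}

// Two different bindings of the same call site:
const doubled = applyTwice((n) => n * 2, 3);     // 12
const incremented = applyTwice((n) => n + 1, 3); // 5
```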

You can pass whole object instances representing implementations that have been instantiated via some dynamic mechanism (ex: Java beans, reflection, IoC frameworks such as Spring, OSGi, etc.).

There is really no shortage of ways one could implement late binding.

Late Binding Into AI CodeGen

Imagine you have a set of utilities that can perform various tasks; you are free to think of these in pretty much any form you want, i.e., API, SDK, CLI, etc.

To use such utilities in automation, you may wish to wrap additional logic around them in order to accomplish a particular task (ex: to satisfy a specific customer need). 

For instance, you may have a utility that knows how to parse a given programming language and provides a way to iterate over the AST to extract information or perform additional tasks.
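As a sketch of what such a utility's contract might look like (every name here is hypothetical, not an actual Solvent API), along with the kind of thin wrapper logic an LLM could write against the contract alone:

```typescript
// Hypothetical contract for a parsing utility; all names are illustrative.
interface AstNode {
  kind: string;            // e.g. "function", "class", "variable"
  name: string;
  children: AstNode[];
}

interface CodeParser {
  parse(source: string): AstNode;                         // build the AST
  walk(root: AstNode, visit: (n: AstNode) => void): void; // iterate over it
}

// Wrapper logic an LLM could generate against the contract alone:
// collect the names of all function nodes in a parsed tree.
function collectFunctionNames(parser: CodeParser, source: string): string[] {
  const names: string[] = [];
  parser.walk(parser.parse(source), (n) => {
    if (n.kind === "function") names.push(n.name);
  });
  return names;
}
```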

You could provide an LLM with the necessary context and hope that it figures out how to create such a utility by itself using whatever APIs are available. Or you could (possibly with the help of LLMs) create such utilities in a reusable manner and then use LLMs to drive their reuse in an automated fashion.

The gist of the idea is that for small-scale automation, it defeats the purpose if, every time you wish to use AI, you have to embark on a journey involving the usual challenges of working with LLMs in order to get the desired results.


By abstracting away the challenging tasks that LLMs are likely to struggle with into pre-built utilities, you leave the LLMs to wrap modest glue logic/code around those utilities, driving just-in-time automation in a manner that is likely to be consistent, reliable, and desirable.

The concrete way to accomplish this is to instruct the LLM to, in essence, assume the existence of your utilities (ex: as object instances): provide information about the interface they expose, provide an object that represents an implementation instance, and tell the LLM to use that object to access the utility. The LLM need not worry about how the object will be instantiated; it just needs to know where/how (contextually) it can find this object.
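A hedged sketch of the pattern (the `ReportUtil` interface and its method are hypothetical stand-ins for real platform utilities): the LLM is told that an object satisfying this interface will be injected at runtime, and is asked only to write the wrapper logic around it.

```typescript
// Interface description supplied to the LLM as context.
interface ReportUtil {
  // Pre-built functionality; its implementation is opaque to the LLM.
  fetchRows(query: string): Record<string, number>[];
}

// Code the LLM generates: it assumes `util` will be injected at runtime
// and never concerns itself with how `util` is constructed.
function totalByColumn(util: ReportUtil, query: string, column: string): number {
  return util
    .fetchRows(query)
    .reduce((sum, row) => sum + (row[column] ?? 0), 0);
}

// At execution time, the runtime injects a concrete implementation
// (stubbed here for illustration).
const injected: ReportUtil = {
  fetchRows: () => [{ amount: 10 }, { amount: 32 }],
};
console.log(totalByColumn(injected, "select *", "amount")); // 42
```

The generated code stays small and disposable; all the hard-won, failure-prone functionality lives behind the injected object.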

There is nothing new about using dependency injection. We simply observe that if the functionality (ex: domain-specific utilities) that is likely to trip up LLMs can be abstracted away into pre-built utilities, it becomes very easy to call on AI/LLMs to drive the reuse of those utilities via CodeGen that relies on injected objects.

This approach is a kind of tool use, except it is targeted at the actual code the AI writes as opposed to something the model invokes directly.

From our experiments, this proves to be a nice way to drive small-scale automation. We can ask the LLM to write code targeting our utility object, and it will do so without us having to reimplement the functionality of that utility every time we need some variation of automation.

For us, this approach is primarily intended to target the Solvent-Botworx environment itself, i.e., exposing platform capabilities as utilities that we then have AI write small programs around. The approach, however, can be implemented in any other software platform environment.

Demo Video

