Harness the power of Large Language Models (LLMs) to refine, enhance, and streamline your documentation. With just a few lines of code, you can integrate advanced LLM-powered tasks into your build and deployment pipeline.
Command-line tools and scripts are the go-to method for integrating new tools into our daily development workflows. From CI actions to VSCode companions, they provide a ubiquitous way to enhance how we work.
Composable streamlines the creation of command-line scripts for LLM-powered tasks that can be deployed virtually anywhere.
We opted for documentation proofreading to demonstrate how effortlessly one can craft LLM-powered command-line tools. As you'll discover, it requires less than 10 minutes and a mere 20 lines of code!
Our latest example demonstrates how easy it is to leverage an LLM to proofread documentation via Composable's Studio. The Interaction "Proofread Documentation" ingests a document (in MDX, Markdown, or plain-text format) and promptly outputs a polished version. An added bonus? In addition to GPT-4, it can run on just about any model offered by Replicate and Hugging Face, such as Llama 2 and Mistral, which you can easily fine-tune to your documentation style.
1. Setup Interaction: Kick off by configuring an Interaction named 'Proofread Documentation' within your Composable project. This interaction should:
   - take content as an input field;
   - return updated_content and changes_summary.
2. Quick Code Integration: Once the interaction is set up, wiring it into your client code is straightforward: fewer than 10 lines of code and you're all set (a sketch follows this list). For those curious about the nuances of embedding Interactions in Typescript/Javascript, we've prepared a detailed guide!
3. Clone, Sync, & Execute: Clone our repository, install cpcli (npm install -g @composableprompts/cli), synchronize your interaction, and run the proofreading task on your file. This not only speeds up proofreading but also summarizes the modifications made (handy for your commit message), improving the clarity, precision, and style of your documentation.
4. Review the Outcome: Armed with a revised file and a change summary, you can open a new pull request or simply review and commit!
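To give a flavor of step 2, here is a minimal TypeScript sketch of the integration. The field names (content, updated_content, changes_summary) come straight from the interaction above, but the endpoint URL, environment variables, and request shape are illustrative assumptions, not the actual Composable client API; the Typescript/Javascript guide covers the real SDK calls.

```typescript
// proofread.ts
// A minimal sketch, not the actual Composable client API: the endpoint URL,
// environment variables, and request shape are assumptions made for
// illustration only.

// Input/output fields match the 'Proofread Documentation' interaction above.
export interface ProofreadResult {
  updated_content: string;
  changes_summary: string;
}

export async function proofread(content: string): Promise<ProofreadResult> {
  // Hypothetical HTTP endpoint standing in for the real Composable API call.
  const response = await fetch(process.env.PROOFREAD_ENDPOINT_URL!, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.COMPOSABLE_API_KEY}`,
    },
    body: JSON.stringify({ content }),
  });
  if (!response.ok) {
    throw new Error(`Proofread request failed: ${response.status}`);
  }
  return (await response.json()) as ProofreadResult;
}
```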
This serves as a testament to the ease of assimilating LLM-powered tasks into your routine automated workflows. If it's executable via the command line, its potential is boundless!
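As a rough illustration of that point, the sketch below wires the hypothetical proofread helper from the previous snippet into a tiny command-line script: it reads a file, writes back the polished version, and prints the change summary. The real workflow in our repository relies on cpcli, so treat this purely as a sketch of the pattern.

```typescript
// proofread-cli.ts - illustrative only; the actual workflow uses cpcli from
// our repository. This sketch just shows how little glue code is needed.
import { readFile, writeFile } from "node:fs/promises";
import { proofread } from "./proofread"; // hypothetical helper from the sketch above

async function main() {
  const filePath = process.argv[2];
  if (!filePath) {
    console.error("Usage: proofread-cli <path-to-md-or-mdx-file>");
    process.exit(1);
  }

  // Send the document to the interaction and get back the polished version
  // plus a summary of what changed.
  const content = await readFile(filePath, "utf8");
  const { updated_content, changes_summary } = await proofread(content);

  // Overwrite the file with the proofread version and surface the summary,
  // ready to paste into a commit message or pull request description.
  await writeFile(filePath, updated_content, "utf8");
  console.log(changes_summary);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```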
We're contemplating turning this into a generic GitHub Action to operationalize tasks within the build pipeline; it would be very handy to run arbitrary LLM-powered tasks as part of our build and deployment workflows. We can't wait to get this running to proofread our Next.js-powered marketing website!
Let us know if you'd like help building new tools with this approach; we'd be happy to discuss.