How to use AI in Python with LMQL?

LMQL is a Python implementation of a SQL-like language for machine learning described in the “Prompting Is Programming: A Query Language for Large Language Models” research paper. The library allows writing prompts for Large Language Models in the form of SQL-like queries. In addition to building the prompt, the library can also check constraints on the generated text and pass variables between multiple AI interactions within a single query.

To use the query language in Python, we have to install the library and a backend providing an AI model. LMQL supports the OpenAI API and Hugging Face Transformers. Because the OpenAI models work better, I will use the OpenAI API.
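We can install LMQL with pip. The base package is enough for the OpenAI backend; if you want to run local Transformers models, check the LMQL installation documentation for the extra dependencies:

pip install lmql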

import os
import asyncio
import lmql


os.environ['OPENAI_API_KEY'] = "YOUR OPEN AI KEY"

LMQL Query

In the LMQL examples, we can find the perfect query to illustrate all elements of the query language:

argmax
   """A list of good dad jokes. A indicates the punchline
   Q: How does a penguin build its house?
   A: Igloos it together.
   Q: Which knight invented King Arthur's Round Table?
   A: Sir Cumference.
   Q:[JOKE]
   A:[PUNCHLINE]"""
from
   'openai/text-davinci-003'
where
   len(JOKE) < 120 and STOPS_AT(JOKE, "?") and
   STOPS_AT(PUNCHLINE, "\n") and len(PUNCHLINE) > 1

The argmax instruction tells the library to use the token decoder that selects the most probable token at each step. argmax decoding is deterministic (as long as we use the same version of the model). The library provides two other decoder implementations: sample and beam search.
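For non-deterministic output, we could swap the first line of the query for the sample decoder, for example (a sketch; I'm assuming the temperature argument here, so check the decoder documentation for your LMQL version):

sample(temperature=0.7)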

Next, we see the string with the prompt. Every word inside the square brackets is a placeholder. The LMQL library will split the prompt into multiple AI interactions. LMQL sends the prompt fragment before the placeholder to the AI model. The response will replace the placeholder in the prompt. If a prompt contains multiple placeholders, the library will repeat the process until all placeholders get replaced.

The from instruction tells the library which AI model to use. In this case, we use the openai/text-davinci-003 model.

The where instruction contains the constraints of the generated text. In the case of open-source models, the library can influence the generated tokens by creating a mask for blocking invalid tokens. However, when we use the OpenAI implementation, LMQL can only validate the response from the API and reject it if it does not satisfy the constraints.

The constraints feature is explained in the documentation, as well as in the research paper mentioned earlier:

LMQL constraints are evaluated eagerly on each generated token, and will be used by the runtime to generate token masks during generation. This means, that the provided constraints are either satisfied by directly guiding the model during generation appropriately or, if this is not possible, validation will fail early on during generation, saving the cost of generating invalid output.
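Other constraints follow the same pattern. As a minimal sketch, here is a query that forces a placeholder to be an integer (INT is one of the constraint functions described in the LMQL documentation):

argmax
   "Q: How many continents are there?\n"
   "A: [N]"
from
   'openai/text-davinci-003'
where
   INT(N)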

Using LMQL in a Python Function

The simplest way to use LMQL in Python is to implement a function with the @lmql.query decorator. In the function body, we have to put a single string containing the LMQL query. The decorator will provide all of the required code.

@lmql.query
async def explain(dad_joke):
   '''lmql
   argmax
   """Explain the dad joke: {dad_joke}
   Explanation: [EXPLANATION]"""
   from
   "openai/text-davinci-003"
   '''

In the function, we used an additional feature: passing variables from the Python code into the prompt. To do so, we put the variable name inside curly brackets.

When we call the function, we get an LMQLResult object containing the entire prompt and the variables generated by the model to replace the placeholders:

await explain("""My dad quit his job to pursue his dream in archeology.

His career is now in ruins.""")

Note the list enclosing the response object!

[LMQLResult(prompt='Explain the dad joke: My dad quit his job to pursue his dream in archeology.\n\nHis career is now in ruins.\nExplanation: \nThis is a pun on the word "ruins," which can refer to both the remains of an ancient civilization and a situation that has been destroyed or ruined. The joke is that by quitting his job to pursue his dream in archeology, his career has been "ruined" or destroyed.', variables={'EXPLANATION': '\nThis is a pun on the word "ruins," which can refer to both the remains of an ancient civilization and a situation that has been destroyed or ruined. The joke is that by quitting his job to pursue his dream in archeology, his career has been "ruined" or destroyed.'}, distribution_variable=None, distribution_values=None)]

Using LMQL for Text Classification

We can use LMQL to write a Chain-of-Thought prompt for text classification. With this prompt engineering technique, we instruct the AI to first generate an observation about the input and then use the observation to choose the final answer. We will create the prompt in the zero-shot variant, where we don’t provide an example.

@lmql.query
async def classify_review(review):
  '''argmax
    """Review: {review}\n
    Q: What is the underlying sentiment of this review and why?\n
    A:[ANALYSIS]\n
    Based on this, the overall sentiment of the message can be considered to be [CLASSIFICATION]"""
  from
    "openai/text-davinci-003"
  WHERE
    CLASSIFICATION in ["positive", "neutral", "negative"]
  '''

The prompt contains one variable and two placeholders. The variable passes the review from the function arguments to the prompt. The first placeholder instructs the model to analyze the content of the review. The second placeholder is the final classification. The WHERE clause specifies what classification responses are allowed.

When we call the function, we get the following result:

result = await classify_review("""Schweinsbraten schlecht, Kässpatzen ohne Geschmack mit blassen nicht krossen Röstzwiebeln.
Der Kaiserschmarren war dann der  Gipfel , in 5 Min fertig und  einfach nur mies !!!!
Alles in Allem eine Frechheit!!!!

Nie wieder!!!!!!""")
{'ANALYSIS': ' The underlying sentiment of this review is one of extreme dissatisfaction. The reviewer is very unhappy with the quality of the food, describing the Schweinsbraten as "schlecht" (bad), the Kässpatzen as having "ohne Geschmack" (no taste) and the Kaiserschmarren as "mies" (awful). The reviewer expresses their dissatisfaction by saying "Alles in Allem eine Frechheit!" (All in all, a disgrace!) and concludes with "Nie wieder!" (Never again!).',
 'CLASSIFICATION': 'negative'}
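To get just the label in Python code, we can read it from the variables attribute of the returned LMQLResult:

classification = result[0].variables['CLASSIFICATION']
print(classification)  # negative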

Using LMQL to Interact with Tools

Similarly to other AI libraries, LMQL allows AI to interact with tools.

Just like LangChain, it will let the AI generate the text until the model produces a token denoting the start of an interaction with a tool. At this point, the library will intercept the response, call a Python function, append the function’s result as a string to the prompt, and call the AI model again to finish generating the text.

Let’s write a function to extract the named entity from an article’s title and search for information about the entity on the web.

When we want to use tools, the prompt requires at least a one-shot learning technique. We may need more than one example to show the AI model what we want.

def find(noun):
      ... # find the information about the noun on the web
      return result_from_the_web

@lmql.query
async def what_is_it_about(title):
  '''argmax
        # one-shot learning
        "Title: How to determine the partition size in Apache Spark\n"
        "Named entity: Apache Spark\n"
        "Definition: Apache Spark is an open-source unified analytics engine for large-scale data processing.\n"
        "Result: Apache Spark is an open-source unified analytics engine for large-scale data processing.\n"
        "\n"
        # prompt template
        "Title: {title}\n"
        "Named Entity: [NAMED_ENTITY]"
        "Definition: {find(NAMED_ENTITY)}\n"
        "Result: [RESULT]"
  from
        'openai/text-davinci-003'
  where
        STOPS_AT(NAMED_ENTITY, "\n") and
        STOPS_AT(RESULT, "\n")'''

In the prompt, we see the one-shot learning example. The template contains two placeholders: NAMED_ENTITY and RESULT. When the AI model generates the NAMED_ENTITY, LMQL will pass the generated value to the find function and put the response in the prompt after the Definition: keyword. The RESULT placeholder will be replaced with the text generated by the AI model. In the example above, the model should return the definition text verbatim as the result.

In the where clause, we instruct the model to stop generating the NAMED_ENTITY and RESULT when it produces a new line character.
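The find function above is only a stub. As a sketch of one possible implementation, we could fetch a short definition from the Wikipedia summary endpoint (the requests library and the Wikipedia REST API are not part of LMQL; this is just one way to fill in the stub):

import requests


def find(noun):
    # Look up a short summary of the named entity on Wikipedia
    title = noun.strip().replace(" ", "_")
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    response = requests.get(url, headers={"Accept": "application/json"})
    if response.status_code != 200:
        return "No definition found."
    # The "extract" field contains the first paragraph of the article
    return response.json().get("extract", "No definition found.")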

Let’s call the function:

result = (await what_is_it_about("How to restart a stuck DAG in Apache Airflow"))

When we print the prompt, we see that the model correctly recognized the named entity, and the library retrieved the information about the entity from the web. In the end, the model returned the definition as the result.

print(result[0].prompt)
Title: How to determine the partition size in Apache Spark
Named entity: Apache Spark
Definition: Apache Spark is an open-source unified analytics engine for large-scale data processing.
Result: Apache Spark is an open-source unified analytics engine for large-scale data processing.

Title: How to restart a stuck DAG in Apache Airflow
Named Entity:  Apache Airflow
Definition: Apache Airflow is an open-source workflow management platform for data engineering pipelines.
Result:  Apache Airflow is an open-source workflow management platform for data engineering pipelines.

We can get the result from the variables attribute:

result[0].variables
{'NAMED_ENTITY': ' Apache Airflow\n',
 'RESULT': ' Apache Airflow is an open-source workflow management platform for data engineering pipelines.'}

The model’s response isn’t clean. It contains additional spaces and new lines. We can choose whether we want to handle such issues in the where clause, making it more complicated and less readable, or clean the response in Python code. Personally, I prefer to keep the where clause simple and fix the answer in Python.
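For example, the cleanup can be as simple as stripping the whitespace from the generated variables (using the result object from the call above):

named_entity = result[0].variables['NAMED_ENTITY'].strip()
definition = result[0].variables['RESULT'].strip()

print(named_entity)  # Apache Airflow
print(definition)    # Apache Airflow is an open-source workflow management platform for data engineering pipelines.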


