
Building Natural Language Interfaces with LLM Function Calling

Large Language Models (LLMs) are good at generating coherent text, but they have a few inherent limitations:

  1. Hallucinations: They generate text by predicting likely continuations, so they may produce information that is not grounded in fact.
  2. Knowledge Cutoff: LLMs are trained on a fixed dataset, so they lack access to real-time information and cannot perform tasks like browsing the web or executing code.
  3. Abstraction and Reasoning: LLMs can struggle with abstract reasoning and tasks that require precise logical steps or mathematical operations. Without interfacing with external tools, their output is not reliable enough for tasks governed by fixed rule-sets.

There are two common ways to address these limitations: grounding the model's responses in retrieved context (retrieval-augmented generation), and letting the model delegate work to external tools through function calling.
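The function-calling idea can be sketched without any particular provider's API: the model emits a structured request naming a tool and its arguments, and the application routes that request to a real function. The tool name and JSON shape below are illustrative assumptions, not a specific vendor's schema.

```python
import json

# Hypothetical local tool the model can invoke; the name and signature
# are illustrative, not from any specific provider's API.
def get_word_length(word: str) -> int:
    """Deterministic helper the LLM can delegate exact computation to."""
    return len(word)

TOOLS = {"get_word_length": get_word_length}

def dispatch(tool_call_json: str):
    """Route a model-emitted function call to the matching local function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output: instead of guessing, the model asks for the tool.
model_output = '{"name": "get_word_length", "arguments": {"word": "codenames"}}'
result = dispatch(model_output)
print(result)  # 9
```

In a real system the JSON would come from the LLM's response and the tool result would be fed back into the conversation; the dispatch step itself stays this simple.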

Building a Codenames AI Assistant with Multi-Modal LLMs

Codenames is a word association game where two teams guess secret words based on one-word clues. The game involves a 25-word grid, with each team identifying their words while avoiding the opposing team’s words and the “assassin” word.
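The board state described above maps naturally onto a small data structure. This is a minimal sketch with an invented word pool (a real Codenames deck has hundreds of cards), using the standard role split of 9 / 8 / 7 / 1:

```python
import random

# Illustrative word pool; a real deck would be much larger.
WORD_POOL = [
    "apple", "river", "knight", "piano", "satellite", "glass", "comet",
    "anchor", "forest", "button", "tiger", "cloud", "bridge", "candle",
    "rocket", "mirror", "island", "violin", "desert", "lantern",
    "beacon", "harbor", "meadow", "cipher", "ember",
]

def deal_board(seed: int = 0) -> dict:
    """Deal a 25-word grid: 9 words for the starting team, 8 for the
    other team, 7 bystanders, and 1 assassin."""
    rng = random.Random(seed)
    words = rng.sample(WORD_POOL, 25)
    roles = ["red"] * 9 + ["blue"] * 8 + ["bystander"] * 7 + ["assassin"]
    rng.shuffle(roles)
    return dict(zip(words, roles))

board = deal_board()
```

A spymaster assistant only needs this mapping from word to role to decide which words a clue should target and which it must avoid.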

I knew that word embeddings can group words by semantic similarity, which seemed like a good way to cluster the words on the board and generate clues. I was largely successful in getting this to work, with a few surprises and learnings along the way.
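The clustering idea reduces to ranking board words by cosine similarity to a candidate clue. The tiny 3-dimensional vectors below are invented for illustration; a real system would use vectors from an embedding model with hundreds of dimensions:

```python
import math

# Toy "embeddings" invented for illustration only.
EMBEDDINGS = {
    "dog":    [0.90, 0.10, 0.00],
    "cat":    [0.80, 0.20, 0.10],
    "wolf":   [0.85, 0.05, 0.10],
    "piano":  [0.10, 0.90, 0.20],
    "violin": [0.05, 0.95, 0.10],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def closest_group(clue_word, candidates, k=2):
    """Return the k board words most similar to the clue word."""
    clue_vec = EMBEDDINGS[clue_word]
    ranked = sorted(candidates,
                    key=lambda w: cosine(clue_vec, EMBEDDINGS[w]),
                    reverse=True)
    return ranked[:k]

print(closest_group("wolf", ["dog", "cat", "piano", "violin"]))
# ['dog', 'cat'] — the clue "wolf" groups the animal words together
```

Inverting this, a clue generator searches for a word whose embedding sits close to several of its team's words while staying far from the opposing team's words and the assassin.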