Introduction:
Imagine a world where your Python scripts aren’t just lines of code, but living entities capable of understanding and responding to your needs. This isn’t science fiction; it’s the reality of running powerful Large Language Models (LLMs) locally with Ollama and calling them from your Python environment. By combining the versatility of Python with the intelligence of local LLMs, you can unlock a new dimension of creativity and automation.
Why Ollama?
Ollama is a platform for running powerful LLMs locally. It gives developers a high degree of control and flexibility: by running these models on your own machine, you gain greater privacy, reduce costs, and accelerate development. Embrace the possibilities of Ollama and unlock the full potential of AI in your own projects.
Integrating Ollama with Python
To bring this vision to life, we’ll leverage the Ollama Python library. This library provides a straightforward interface for interacting with Ollama models from within your Python code.
1. Installation:
pip install ollama
2. Setting up Ollama:
Ensure you have Ollama installed and running locally; you can download and install it from the official website. You can then verify the setup with the following command.
ollama list
The output lists your locally installed models; here, the phi3.5 model used later in this post is displayed.
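If the model you plan to use is not listed, pull it first. This post uses the phi3.5 model:

ollama pull phi3.5

You can also confirm that the Python library can reach the local Ollama server; a minimal sketch that simply prints the raw model listing:

# Verify the Ollama server is reachable from Python
import ollama

# Prints the raw listing of locally available models
print(ollama.list())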
Writing Your Python Code:
Here is a simple example of Python code that connects to your local Ollama model.
# Import the Ollama Python library
import ollama

def get_user_input():
    """Prompts the user for a question and returns their input."""
    question = input("Ask me anything: ")
    return question

# Conversation history: a system prompt that sets the assistant's persona,
# followed by the user's question
messages = [
    {
        'role': 'system',
        'content': 'Your Name is SYS1, and you limit your response to 50 words'
    },
    {
        'role': 'user',
        'content': get_user_input()
    }
]

# Send the conversation to the local model and print its reply
response = ollama.chat(model='phi3.5:latest', messages=messages)
print(response['message']['content'])
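The system message fixes the assistant’s persona and response length for the whole conversation, while the user message carries the actual question; ollama.chat returns a response whose message entry holds the assistant’s reply, which is why the last line reads response['message']['content'].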
Run the script, ask a question, and you should see the model reply within the 50-word limit set by the system prompt, confirming that the integration works.
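If the model name you pass to chat() has not been pulled yet, the library raises an error rather than failing silently. A minimal sketch of handling that case, based on the exception type the ollama library exposes (the prompt text here is just an illustration):

# Handle the case where the requested model is missing locally
import ollama

model = 'phi3.5:latest'
messages = [{'role': 'user', 'content': 'Say hello in five words.'}]

try:
    response = ollama.chat(model=model, messages=messages)
    print(response['message']['content'])
except ollama.ResponseError as e:
    print('Error:', e.error)
    # A 404 status means the model is not available locally, so pull it
    if e.status_code == 404:
        ollama.pull(model)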
Expanding Your Horizons:
This is just the beginning. You can take this integration to the next level, as sketched below.
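For example, chat() also supports streaming, so you can print the model’s reply as it is generated instead of waiting for the full response. A minimal sketch, again assuming the phi3.5 model (the prompt is just an illustration):

# Stream the model's reply as it is generated
import ollama

messages = [{'role': 'user', 'content': 'Explain recursion in one sentence.'}]

# stream=True turns the call into a generator of partial responses
for chunk in ollama.chat(model='phi3.5:latest', messages=messages, stream=True):
    # Each chunk carries the next fragment of the assistant's message
    print(chunk['message']['content'], end='', flush=True)
print()

From here, you could maintain a growing messages list for multi-turn conversations, experiment with different system prompts, or wrap the loop in a simple command-line chat interface.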