Google Colab is a cloud-based Jupyter Notebook environment for writing and executing Python code. Because it runs on Google-hosted virtual machines, you don't need to configure a local environment, which makes it an excellent choice for data science, machine learning, and general Python scripting. Sometimes, however, you need to run shell commands directly, such as installing packages, managing files, or running system-level utilities. Colab lets you execute shell commands inside notebook cells, and it can also give you access to a full terminal. In this guide, we'll show you how to access the terminal in Google Colab, install Ollama and use it to pull machine learning models, and then run inference with LangChain and Ollama.
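As a quick refresher, any command prefixed with ! in a notebook cell is passed to the system shell, so simple tasks don't require a terminal at all:
!python --version      # check the Python version on the VM
!ls /content           # list files in Colab's default working directory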
Step 1: Install and Load colab-xterm
To access the terminal in Google Colab, you need to install and enable the colab-xterm extension. Run the following commands in a Colab cell:
!pip install colab-xterm
%load_ext colabxterm
Once installed and loaded, you can launch the terminal by running:
%xterm
This will open a terminal interface directly within your Colab environment.
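The terminal behaves like a regular Linux shell, so you can check the environment with ordinary commands before installing anything:
pwd                 # current directory (typically /content)
python3 --version   # Python available on the VM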
Install Ollama in the terminal using the following Linux command:
curl -fsSL https://ollama.com/install.sh | sh
Step 2: Pulling a Model Using Ollama
Once you have access to the terminal, you can download machine learning models. For example, to pull the deepseek-r1:7b or llama3 model with Ollama, run one of the following commands in the terminal (the ! prefix is only needed when running shell commands from a notebook cell, not in the terminal itself):
ollama pull deepseek-r1:7b
or
ollama pull llama3
This will download the model and prepare it for usage in your Colab notebook.
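Note: the Ollama install script starts a background server when systemd is available, but a typical Colab VM has no systemd, so the server may not be running. If ollama pull fails with a connection error, start the server in the background first, then verify your downloads (ollama serve and ollama list are standard Ollama CLI commands):
ollama serve &    # start the Ollama server in the background
ollama list       # list the models downloaded so far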
Step 3: Installing Required Libraries
After downloading the model, install the necessary Python libraries to interact with the model. Run these commands in a new Colab cell:
!pip install langchain
!pip install langchain-core
!pip install langchain-community
These libraries are essential for working with large language models in a structured way.
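As a quick sanity check, you can confirm the packages installed correctly by printing their versions in a new cell:
from importlib.metadata import version

# Print the installed version of each LangChain package
for pkg in ["langchain", "langchain-core", "langchain-community"]:
    print(pkg, version(pkg))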
Step 4: Running Inference with LangChain and Ollama
Once all dependencies are installed, you can use LangChain to interact with the model. Add the following code in a Colab cell:
from langchain_community.llms import Ollama

# Load the model
llm = Ollama(model="llama3")

# Make a request to the model
llm.invoke("tell me about Analytics Vidhya")
Output:

Analytics Vidhya!

Analytics Vidhya is a popular online community and platform that focuses on data science, machine learning, and analytics competitions. The platform was founded in 2012 by three data enthusiasts: Vivek Kumar, Ashish Thottumkal, and Pratik Jain.

Here's what makes Analytics Vidhya special:

1. **Competitions**: The platform hosts regular competitions (called "challenges") that are open to anyone interested in data science, machine learning, or analytics. Participants can choose from a variety of challenges across various domains, such as finance, marketing, healthcare, and more.
2. **Real-world datasets**: Challenges often feature real-world datasets from well-known organizations or companies, which participants must analyze and solve using their skills in data science and machine learning.
3. **Judging criteria**: Each challenge has a set of judging criteria, which ensures that submissions are evaluated based on specific metrics (e.g., accuracy, precision, r
This will load the llama3 model and generate a response for the given prompt.
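From here, you can compose the model with other LangChain building blocks. Below is a minimal sketch (the prompt wording and topic are illustrative, not part of the original tutorial) that pipes a prompt template into the model to build a reusable chain:
from langchain_core.prompts import PromptTemplate
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")

# A reusable template; {topic} is filled in at invoke time
prompt = PromptTemplate.from_template("Explain {topic} in two sentences.")

# Piping a prompt into an LLM yields a runnable chain
chain = prompt | llm
print(chain.invoke({"topic": "retrieval-augmented generation"}))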
Conclusion
By following these steps, you can easily access a terminal in Google Colab, enabling you to install dependencies, download machine learning models using Ollama, and interact with them via LangChain. This transforms Colab into a versatile AI development environment, allowing you to experiment with cutting-edge models, automate workflows, and streamline your machine learning research—all within a cloud-based notebook.
Frequently Asked Questions
Q. How do I access the terminal in Google Colab?
A. Install the colab-xterm extension with !pip install colab-xterm, load it with %load_ext colabxterm, and then launch the terminal by running %xterm in a Colab cell.
Q. How do I install Ollama and download models in Colab?
A. Install Ollama in the terminal by running curl -fsSL https://ollama.com/install.sh | sh, then use ollama pull to download models, for example ollama pull llama3.
Q. Can I run inference with LangChain and Ollama in Colab?
A. Yes. After installing LangChain and downloading a model, you can use Ollama in LangChain to run inference. For example, llm.invoke("tell me about Analytics Vidhya") generates a response.
Q. Is Google Colab suitable for deep learning?
A. Yes. Google Colab supports deep learning and large datasets, especially with GPUs/TPUs. Colab Pro offers additional resources for faster processing and larger models, ideal for deep learning tasks.