Google has been making waves in the AI space with its Gemini 2.0 models, bringing substantial upgrades to their chatbot and developer tools. With the introduction of Gemini 2.0 Flash, Gemini 2.0 Pro (experimental), and the new cost-efficient Gemini 2.0 Flash-Lite, I was eager to get hands-on experience with each of these models—and yes, I tried them all for free!
How to Get a Gemini 2.0 API Key?
Step 1: Go to this link.
Step 2: Click on “Get a Gemini API Key”
Step 3: Now, click on “Create API Key”
Step 4: Select a project from your existing Google Cloud projects.
Step 5: Alternatively, search for the Google Cloud project you want to use and select it. This will generate the API key for that project!
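Once you have the key, it's best to keep it out of your source code. Here's a minimal sketch of one common pattern outside Colab, assuming you've exported the key as an environment variable named GOOGLE_API_KEY (the Colab examples below use userdata.get instead):

import os
import google.generativeai as genai

# Read the key from the environment rather than hard-coding it
# (set it first, e.g. `export GOOGLE_API_KEY="..."` in your shell).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])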
Hands-on with Gemini 2.0 Flash
Gemini 2.0 Flash, initially an experimental release, is now widely accessible and integrated into various Google AI products. Having tested it through the Gemini API in Google AI Studio and Vertex AI, I found it to be a faster, more optimized version of its predecessor. While it lacks the deep reasoning abilities of the Pro model, it handles quick responses and general tasks remarkably well.
To know more, check out this blog.
Key Features I Noticed
- Improved Speed: The model is highly responsive, making it ideal for real-time applications.
- Upcoming Features: Google has announced text-to-speech and image generation capabilities for this model, which could make it even more versatile.
- Seamless Integration: Accessible through the Gemini app, Google AI Studio, and Vertex AI, making it easy to implement in various applications.
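To put the responsiveness claim to a quick test, you can time a single call. This is a rough sketch, assuming the google-generativeai SDK used below and an already-configured API key; the prompt is just an example:

import time
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called (see below).
model = genai.GenerativeModel(model_name="models/gemini-2.0-flash")

start = time.perf_counter()
response = model.generate_content("Summarize the rules of chess in two sentences.")
print(f"Latency: {time.perf_counter() - start:.2f}s")
print(response.text)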
Code:
!pip install -q -U google-generativeai

import base64

import httpx
import google.generativeai as genai
from IPython.display import Markdown
from google.colab import userdata

GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)

# Retrieve an image
image_path = "https://cdn.pixabay.com/photo/2022/04/10/09/02/cats-7122943_1280.png"
image = httpx.get(image_path)

# Choose a Gemini model
model = genai.GenerativeModel(model_name="models/gemini-2.0-flash")

# Create a prompt
prompt = "Caption this image."

response = model.generate_content(
    [
        {
            "mime_type": "image/png",  # the source image is a PNG
            "data": base64.b64encode(image.content).decode("utf-8"),
        },
        prompt,
    ]
)
Markdown(">" + response.text)
Output:
Two cartoon cats are interacting with a large flower. The cat on the left is tan with brown stripes and is reaching out to touch a large green leaf. The cat on the right is gray with darker gray stripes and is looking up at the flower with interest. The flower has orange petals and a pale center. There are also some smooth stones at the base of the flower. The background is a light blue color.
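Because Flash is aimed at real-time use, multi-turn chat is a natural fit. Here's a minimal sketch using the same google-generativeai SDK as above (start_chat maintains the running history for you); the prompts are my own examples:

# Reusing the `model` object from the captioning example above.
chat = model.start_chat(history=[])

first = chat.send_message("Name three common cat breeds.")
print(first.text)

# The follow-up relies on the history the chat object keeps.
followup = chat.send_message("Which of those sheds the least?")
print(followup.text)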
Also Read: Gemini 2.0 Flash vs GPT 4o: Which is Better?
Testing Gemini 2.0 Pro (Experimental)
This flagship model is still in an experimental phase, but I got early access via Google AI Studio. Gemini 2.0 Pro is designed for complex reasoning and coding tasks, and it certainly lived up to expectations.
My Takeaways
- Massive 2M Token Context Window: The ability to process large datasets efficiently is a game-changer.
- Advanced Reasoning: Handles multi-step problem-solving better than any previous Gemini model.
- Best Coding Performance: I tested it with programming challenges, and it outperformed other Gemini models in generating structured and optimized code.
- Tool Integration: The model can leverage Google Search and code execution to enhance responses.
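To try the Google Search integration yourself, the google-genai SDK (the same one used in the code below) exposes it as a tool. This is a minimal sketch, not the article's exact setup; the prompt, and the Search tool being enabled for your key, are assumptions:

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Attach Google Search so the model can ground its answer in live results.
response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",
    contents="Summarize this week's biggest AI announcement.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)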
Gemini 2.0 Pro is available now as an experimental model to developers in Google AI Studio and Vertex AI and to Gemini Advanced users in the model drop-down on desktop and mobile.
To know more, check out this blog.
Code:
!pip install -q -U google-genai

from google import genai

# Replace "YOUR_API_KEY" with your actual API key
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05", contents="Give me code for the nth Fibonacci series"
)
print(response.text)
Output:
def fibonacci_nth(n):
    """
    Calculates the nth Fibonacci number.

    Args:
        n: The index of the desired Fibonacci number (non-negative integer).

    Returns:
        The nth Fibonacci number. Returns 0 if n is 0. Returns 1 if n is 1.
        Handles invalid input (negative numbers) by raising a ValueError.

    Raises:
        ValueError: if n is negative.
    """
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    elif n <= 1:
        return n  # Base cases: F(0) = 0, F(1) = 1
    else:
        a, b = 0, 1
        for _ in range(2, n + 1):
            a, b = b, a + b  # Efficiently update a and b
        return b


def fibonacci_series(n):
    """
    Generates the first n Fibonacci numbers.

    Args:
        n: The number of Fibonacci numbers to generate (non-negative integer).

    Returns:
        A list containing the first n Fibonacci numbers. Returns an empty list if n is 0.
        Handles invalid input (negative n) by raising a ValueError.

    Raises:
        ValueError: if n is negative.
    """
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    elif n == 0:
        return []
    elif n == 1:
        return [0]
    else:
        series = [0, 1]
        while len(series) < n:
            next_fib = series[-1] + series[-2]
            series.append(next_fib)
        return series


def fibonacci_recursive(n):
    """
    Calculates the nth Fibonacci number using recursion.

    Args:
        n: The index of the desired Fibonacci number (non-negative integer).

    Returns:
        The nth Fibonacci number.

    Raises:
        ValueError: if n is negative.
    """
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    elif n <= 1:
        return n
    else:
        return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)


def fibonacci_memoization(n, memo={}):
    """
    Calculates the nth Fibonacci number using recursion with memoization.

    Args:
        n: The index of the desired Fibonacci number (non-negative integer).
        memo: A dictionary to store previously computed Fibonacci numbers.

    Returns:
        The nth Fibonacci number.

    Raises:
        ValueError: if n is negative.
    """
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci_memoization(n - 1, memo) + fibonacci_memoization(n - 2, memo)
    return memo[n]


def fibonacci_golden_ratio(n):
    """
    Calculates the nth Fibonacci number using the Golden Ratio formula (Binet's formula).

    Args:
        n: The index of the desired Fibonacci number (non-negative integer).

    Returns:
        The nth Fibonacci number, rounded to the nearest integer. Due to floating-point
        precision, this method might be inaccurate for very large values of n.

    Raises:
        ValueError: if n is negative.
    """
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    phi = (1 + 5**0.5) / 2  # Golden Ratio
    return round((phi**n - (1 - phi)**n) / (5**0.5))


# --- Examples and Testing ---
if __name__ == "__main__":
    # Test cases
    test_cases = [0, 1, 2, 3, 4, 5, 6, 7, 10, 20, 30]

    print("Iterative Method:")
    for n in test_cases:
        print(f"F({n}) = {fibonacci_nth(n)}")

    print("\nSeries Method:")
    for n in test_cases:
        print(f"First {n} Fibonacci numbers: {fibonacci_series(n)}")

    print("\nRecursive Method (Warning: Slow for larger n):")
    for n in test_cases[:10]:  # Limit recursive tests for performance
        print(f"F({n}) = {fibonacci_recursive(n)}")

    print("\nMemoization Method:")
    for n in test_cases:
        print(f"F({n}) = {fibonacci_memoization(n)}")

    print("\nGolden Ratio Method:")
    for n in test_cases:
        print(f"F({n}) = {fibonacci_golden_ratio(n)}")

    # Example of handling invalid input
    try:
        print(fibonacci_nth(-1))
    except ValueError as e:
        print(f"Error: {e}")

    try:
        print(fibonacci_series(-5))
    except ValueError as e:
        print(f"Error: {e}")

    try:
        fibonacci_recursive(-2)
    except ValueError as e:
        print(f"Error: {e}")

    try:
        fibonacci_memoization(-3)
    except ValueError as e:
        print(f"Error: {e}")

    try:
        fibonacci_golden_ratio(-4)
    except ValueError as e:
        print(f"Error: {e}")

    # Larger value testing (iterative and memoization are much faster)
    large_n = 40
    print(f"\nF({large_n}) (Iterative) = {fibonacci_nth(large_n)}")
    print(f"F({large_n}) (Memoization) = {fibonacci_memoization(large_n)}")
    # print(f"F({large_n}) (Recursive) = {fibonacci_recursive(large_n)}")  # Very slow! Avoid for large n.
    print(f"F({large_n}) (Golden Ratio) = {fibonacci_golden_ratio(large_n)}")
Exploring Gemini 2.0 Flash-Lite: The Most Cost-Efficient Model
Gemini 2.0 Flash-Lite is Google’s budget-friendly AI model, offering a balance between performance and affordability. It provides a 1M-token context window and multimodal input support while matching the speed of the previous 1.5 Flash model.
What Stood Out for Me?
- Ideal for Cost-Sensitive Applications: This model is a great choice for businesses or developers looking to reduce AI expenses.
- Smooth Performance: While not as powerful as Pro, it holds up well for general tasks.
- Public Preview Available: No restrictions—anyone can try it through Google AI Studio and Vertex AI.
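Since cost scales with tokens, counting them before you send a prompt is a handy guardrail. Here's a small sketch using the google-genai SDK's count_tokens with the same model name as the code below; treat it as an illustration rather than the article's workflow:

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

prompt = "Give me a bedtime story for my kid"

# Estimate the input size before paying for the call.
token_info = client.models.count_tokens(
    model="gemini-2.0-flash-lite-preview-02-05",
    contents=prompt,
)
print(f"Prompt tokens: {token_info.total_tokens}")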
To know more, check out this blog.
Code:
!pip install -q -U google-genai

from google import genai

# Replace "YOUR_API_KEY" with your actual API key
client = genai.Client(api_key="YOUR_API_KEY")
# Generate content with streaming
response_stream = client.models.generate_content_stream(
    model="gemini-2.0-flash-lite-preview-02-05",
    contents="Give me a bedtime story for my kid"
)

# Process and print the streamed response
for chunk in response_stream:
    print(chunk.text, end="", flush=True)  # Print each chunk as it arrives
Output:
Okay, snuggle in tight and close your eyes. Let's begin... Once upon a time, in a land filled with marshmallow clouds and lollipop trees, lived a little firefly named Flicker. Flicker wasn't just any firefly, oh no! He had the brightest, sparkliest light in the whole valley. But sometimes, Flicker was a little bit shy, especially when it came to shining his light in the dark.
As the sun began to dip behind the giggle-berry bushes, painting the sky in shades of orange and purple, Flicker would start to worry. "Oh dear," he'd whisper to himself, "It's getting dark! I hope I don't have to shine tonight."
All the other fireflies loved to twinkle and dance in the night sky, their lights creating a magical, shimmering ballet. They’d zoom and swirl, leaving trails of sparkling dust, while Flicker hid behind a big, cozy dandelion.
One night, as Flicker was hiding, he saw a little lost bunny, no bigger than his thumb, hopping around in circles. The bunny was sniffing the air and whimpering softly. “Oh dear, I'm lost!” the bunny squeaked. “And it's so dark!”
Flicker’s tiny heart thumped in his chest. He really wanted to stay hidden, but he couldn't bear to see the little bunny scared and alone. Taking a deep breath, Flicker took a leap of faith.
He flew out from behind the dandelion, and with a little *flick!*, his light shone brightly! It wasn't a big, booming light, not at first. But it was enough!
The little bunny perked up his ears and saw the glowing firefly. “Ooooh! You're shining!” the bunny cried. “Can you help me?”
Flicker, surprised by his own courage, fluttered closer and, with a gentle *flicker* and *flicker*, began to lead the bunny along a path made of glowing mushrooms. His light guided the bunny past sleepy snails and babbling brooks until, finally, they reached the bunny's cozy burrow, nestled under the roots of a giant, whispering willow tree.
The bunny turned and looked at Flicker, his eyes shining with gratitude. "Thank you!" he squeaked. "You saved me! You were so brave and your light is so beautiful."
As Flicker flew back towards the giggle-berry bushes, he felt a warm feeling spread through his little firefly body. It wasn't just the warmth of the night; it was the warmth of helping someone else.
That night, and every night after, Flicker flew with the other fireflies. He still felt a little shy sometimes, but he always remembered the little lost bunny. And because of the bunny, Flicker's light grew brighter and stronger with every *flick!*. He learned that the best way to shine is to share your light, and that even the littlest light can make a big difference in the dark.
Now close your eyes tight, little one. Dream of marshmallow clouds and sparkling fireflies. Sweet dreams. Goodnight.
Gemini 2.0 Flash Thinking Mode: A Step Towards Better AI Reasoning
This new mode enhances how Gemini handles problem-solving by explicitly displaying its thought process. Available in the Gemini app, it breaks down problems into smaller components and presents a structured reasoning approach.
My Experience with Thinking Mode
- Clear Explanations: The AI provides step-by-step breakdowns, making it easier to understand complex answers.
- More Human-Like Problem Solving: Unlike traditional models that deliver answers instantly, this mode showcases the reasoning process behind conclusions.
- Previously Limited to Developers: Now available in the Gemini app, making it more accessible for general users.
To know more, check out this blog.
Code:
%pip install -U -q "google-genai"

from google.colab import userdata
GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')

from google import genai
from google.genai import types

client = genai.Client(
    api_key=GOOGLE_API_KEY,
    # Use `v1alpha` so you can see the `thought` flag.
    http_options={'api_version': 'v1alpha'},
)

from pprint import pprint

pprint(
    client.models.get(model="gemini-2.0-flash-thinking-exp-01-21")
    .model_dump(exclude_defaults=True)
)
This call prints the model's metadata. Next, ask the model a question:

response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp-01-21",
    contents="What is SFT? Explain your reasoning step by step."
)
print(response.text)  # Output of the LLM
print("-----")
for line in response.text.split('\n'):
    if line.startswith("Reasoning:"):  # Example, adapt as needed
        print(line)
To render each part of the response as Markdown:

from IPython.display import Markdown, display

for part in response.candidates[0].content.parts:
    display(Markdown(part.text))
Output:
Let's break down what SFT, or Supervised Fine-Tuning, is step-by-step.

1. Understanding the Terms:
To understand SFT, let's dissect the name:
Supervised: Refers to learning from labeled data, similar to having a teacher provide the correct answers. In supervised learning, input data is paired with desired output data (labels).
Fine-Tuning: Implies enhancements to an existing model (pre-trained) by adjusting its parameters (weights) to improve performance on a specific task.

2. Context: Pre-trained Models (Foundation Models)
SFT is commonly applied to large pre-trained models in fields like Natural Language Processing (NLP) and Computer Vision. These models, trained on vast datasets, learn general data patterns.
Pre-training Phase: Imagine a language model trained on the internet's text, learning language nuances, vocabulary, grammar, and relationships between words. This phase is crucial for providing a solid foundational knowledge.

3. Introducing Supervised Fine-Tuning (SFT) - The Core Concept:
Post pre-training, a model generally understands text or images but may not excel in specific tasks. SFT addresses this by adapting the model to perform particular tasks effectively.
Goal of SFT: Adapt a pre-trained model to a specific task using a relevant, task-specific, labeled dataset.

Examples of SFT:
Sentiment Analysis:
Pre-trained model: Trained on internet text.
Task: Determine sentiment (positive or negative) in movie reviews.
SFT Dataset: Movie reviews labeled as "positive" or "negative".
SFT Process: Further train the model on the movie review dataset.

Question Answering:
Pre-trained model: General large language model.
Task: Answer questions based on text passages.
SFT Dataset: Dataset of (context passage, question, answer) triples.
SFT Process: Train the model on this dataset.

Instruction Following (for chatbots):
Pre-trained model: General large language model.
Task: Generate responses based on user instructions.
SFT Dataset: Instructions paired with desired responses.
SFT Process: Train the model to follow instructions and generate appropriate responses.

4. How SFT Works Technically:
Initialization: Start with the weights from the pre-trained model.
Task-Specific Training: Feed the model the SFT dataset.
Loss Function: Define a function to measure prediction accuracy against actual labels.
Optimization: Adjust weights using algorithms like gradient descent to minimize loss.
Smaller Learning Rate: Use a reduced learning rate to fine-tune the model subtly.

5. Benefits of SFT:
Improved Specific Task Performance: Enhances model accuracy on fine-tuned tasks.
Faster Training: Leverages existing knowledge, requiring less data and time than training from scratch.
Task Specialization: Tailors general-purpose models to excel in specific areas.
Alignment with Desired Behavior: Ensures models act in accordance with user expectations.

6. SFT vs. Other Fine-Tuning Methods:
Compared to other techniques like Reinforcement Learning from Human Feedback (RLHF), SFT is often simpler and less computationally intensive, making it an initial step before applying more complex methods like RLHF.

7. Summary - Step-by-Step Reasoning:
Start with a broadly knowledgeable pre-trained model.
Identify the need for specialization in specific tasks.
Utilize task-specific labeled data to guide the model's learning.
Fine-tune the model's existing weights to adapt it to new tasks.
Optimize performance through targeted adjustments and loss minimization.
Achieve enhanced task-specific performance, leveraging foundational knowledge.

In essence, SFT transforms a broadly capable "student" into a specialized expert in a targeted field through focused and supervised learning.
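Because the client above was created with api_version='v1alpha', each response part may carry a thought flag (per the comment in the setup code). Here's a hedged sketch for separating the model's reasoning from its final answer; the exact field name is an assumption based on that comment:

# Split the response into "thinking" and "answer" parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "thought", False):  # assumed v1alpha field marking reasoning
        print("[Thought]")
    else:
        print("[Answer]")
    print(part.text)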
Which Model is Right for You?
Each of these Gemini 2.0 models caters to different use cases. Here’s a quick comparison based on my hands-on testing:
| Model | Best For | Context Window | Availability |
|---|---|---|---|
| Gemini 2.0 Flash | High-volume, high-frequency tasks at scale | 1M tokens | Public |
| Gemini 2.0 Pro (Exp.) | Complex tasks, coding, & deep reasoning | 2M tokens | Google AI Studio, Vertex AI |
| Gemini 2.0 Flash-Lite | Cost-sensitive applications, efficiency | 1M tokens | Public Preview |
Having tested all the latest Gemini 2.0 models, it’s clear that Google is making significant strides in AI development. Each model serves a unique purpose, balancing speed, cost, and reasoning capabilities to cater to different user needs.
- For real-time, high-frequency tasks, Gemini 2.0 Flash is a solid choice, offering impressive speed and seamless integration.
- For complex problem-solving, coding, and deep reasoning, Gemini 2.0 Pro (Experimental) stands out with its 2M token context window and advanced tool integration.
- For cost-conscious users, Gemini 2.0 Flash-Lite provides an affordable yet powerful alternative without compromising too much on performance.
- For better explainability in AI, the Thinking Mode introduces a structured reasoning approach, making AI outputs more transparent and understandable.
Also Read: Google Gemini 2.0 Pro Experimental Better Than OpenAI o3-mini?
Conclusion
Google’s commitment to innovation in AI is evident with these models, offering developers and businesses more options to leverage cutting-edge technology. Whether you’re a researcher, an AI enthusiast, or a developer, the free access to these models provides a fantastic opportunity to explore and integrate state-of-the-art AI solutions into your workflow.
With continued improvements and upcoming features like text-to-speech and image generation, Gemini 2.0 is shaping up to be a major player in the evolving AI landscape. If you’re considering which model to use, it all comes down to your specific needs: speed, intelligence, or cost-efficiency—and Google has provided a compelling option for each.