Configure environment variables before loading the package. See the README for complete details on required and optional settings.
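For example, assuming the package reads its API key from an environment variable named `GEMINI_API_KEY` (confirm the exact name in the README), set it before attaching the package:

```r
# Assumed variable name; check the package README for the exact one
Sys.setenv(GEMINI_API_KEY = "your-api-key")
# then: library(artgemini)
```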

Introduction

This quickstart guide demonstrates tool usage and advanced conversation patterns with artgemini.


Tool Usage Workflows

Using Built-in Tools

gemini_with_tools() enables Gemini’s built-in tools that execute automatically within the API call.

Google Search Grounding: Ground responses with real-time web information:

gemini_with_tools(
  "What are the current trends in digital art in 50 words or less?",
  tools = list(tool_google_search())
)

Code Execution: Execute Python code for calculations:

gemini_with_tools(
  "Generate the first 20 Fibonacci numbers",
  tools = list(tool_code_execution())
)

Combining Tools:

gemini_with_tools(
  "Search for the current price of gold and calculate 10% markup",
  tools = list(tool_google_search(), tool_code_execution())
)

Custom Functions

Define custom functions for external tool integration:

# Define a function
weather_func <- tool_function(
  name = "get_weather",
  description = "Get current weather for a location",
  parameters = list(
    type = "object",
    properties = list(
      location = list(type = "string", description = "City name"),
      units = list(
        type = "string",
        description = "Temperature units",
        enum = list("celsius", "fahrenheit")
      )
    ),
    required = list("location")
  )
)

# Use in request
resp <- gemini_with_tools(
  "What's the weather in Tokyo?",
  tools = list(weather_func),
  temp = 0
)

When the model decides to call your function, the response contains a structured function call (the function name plus JSON-encoded arguments) for you to parse, execute locally, and feed back to the model if needed.
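A minimal dispatch sketch, assuming the parsed call arrives as an R list with `name` and `args` fields (check the package documentation for the actual response structure); `get_weather` here is a stand-in stub, not a real weather lookup:

```r
# Hypothetical local implementation matching the schema declared above
get_weather <- function(location, units = "celsius") {
  # Stub: a real version would query a weather service
  list(location = location, temperature = 22, units = units)
}

# Dispatch a parsed function call (a list with $name and $args)
# to the matching local R function
dispatch_call <- function(fc) {
  do.call(match.fun(fc$name), fc$args)
}

# Shape of a call the model might return, after JSON parsing
fc <- list(name = "get_weather", args = list(location = "Tokyo"))
result <- dispatch_call(fc)
```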

Continuing Conversations

For manual conversation management, use gemini_continue():

# Start conversation with gemini_with_tools or gemini_generate
resp1 <- gemini_with_tools(
  prompt = "Tell me about Renaissance art in 50 words.",
  tools = list(tool_google_search())
)

# Continue - history preserved via contents_history attribute
resp2 <- gemini_continue(resp1, "Who were the five most influential Renaissance artists?")

# Continue further
resp3 <- gemini_continue(resp2, "Tell me about Leonardo da Vinci's techniques.")
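Because the history rides along as an attribute on the response, it can be inspected with base R's `attr()`; a sketch using a mock object in place of a real API response:

```r
# Mock of a response carrying a contents_history attribute, standing
# in for the value gemini_continue() would return from the API
resp <- structure(
  list(text = "example reply"),
  contents_history = list("user turn", "model turn")
)
history <- attr(resp, "contents_history")
length(history)  # number of accumulated turns
```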

Alternative: Interactive Chat

For truly interactive workflows, use gemini_chat() which maintains state internally:

# Create chat object
chat <- gemini_chat(sys_prompt = "You are an art expert.")

# Multi-turn conversation (state maintained internally)
chat$chat("What is Baroque art?")
chat$chat("How does it differ from Renaissance art?")
chat$chat("Name 3 Baroque painters.")

# Streaming for real-time UI
stream <- chat$stream("Tell me a story about an artist.")
coro::loop(for (chunk in stream) cat(chunk))

Context Caching

Reduce costs by caching frequently used context:

# Create a cache with large context (requires 4096+ estimated tokens)
cache_obj <- gemini_cache_create(
  userdata = list(guide = large_style_guide_text),
  ttl_seconds = 3600,
  displayName = "art-style-guide"
)

# Use the cached context
resp <- gemini_chat_cached(
  "Based on the style guide, what are the key criteria?",
  cache = cache_obj$name
)

# Clean up
gemini_cache_delete(cache_obj$name)