# Google AI Studio

Enables Claude to use Google AI Studio for testing prompts, exploring models, and prototyping AI applications.
## Installation

```bash
# Option 1: skill4agent
npx skill4agent add andrejones92/canifi-life-os google-ai-studio

# Option 2: install script
curl -sSL https://canifi.com/skills/google-ai-studio/install.sh | bash

# Option 3: manual copy
cp -r skills/google-ai-studio ~/.canifi/skills/
```

## Configuration

```bash
# First, ensure canifi-env is installed:
# curl -sSL https://canifi.com/install.sh | bash

canifi-env set GOOGLE_EMAIL "your-email@gmail.com"
canifi-env set SERVICE_EMAIL "your-email"
canifi-env set SERVICE_PASSWORD "your-password"
```

## Examples

User: "Test this prompt in AI Studio with different temperatures"
Claude: Opens AI Studio, enters prompt, tests at temp 0.2, 0.7, 1.0.
Returns: "Results comparison:
Low temp: More focused, deterministic
High temp: More creative, varied..."
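
To reproduce this comparison outside the browser, here is a minimal sketch using the google-generativeai Python library; the API key, model name, and prompt are placeholders, not part of the skill:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

prompt = "Summarize the plot of Hamlet in two sentences."  # example prompt

# Run the same prompt at several temperatures and compare outputs.
for temperature in (0.2, 0.7, 1.0):
    response = model.generate_content(
        prompt,
        generation_config=genai.GenerationConfig(temperature=temperature),
    )
    print(f"--- temperature={temperature} ---")
    print(response.text)
```
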
User: "Get the API code for my prompt in Python"
Claude: Opens AI Studio, configures prompt, exports code.
Returns Python code snippet ready for integration.
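
The exported snippet is roughly of this shape (a sketch of what "Get code" produces for Python; the exact output depends on the model and parameters selected in AI Studio):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

generation_config = genai.GenerationConfig(
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_output_tokens=1024,
)

model = genai.GenerativeModel(
    "gemini-1.5-pro",  # whichever model was selected in AI Studio
    generation_config=generation_config,
)

response = model.generate_content("Your prompt from AI Studio here")
print(response.text)
```
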
User: "Test getting JSON output from Gemini"
Claude: Opens AI Studio, configures structured output schema,
tests prompt. Returns: "Structured output working, here's the schema..."
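
Structured JSON output can also be requested directly through the API. This sketch follows the google-generativeai pattern of passing a response schema; the Recipe schema is hypothetical, for illustration only:

```python
import google.generativeai as genai
import typing_extensions as typing

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical schema for illustration: a recipe with a name and ingredients.
class Recipe(typing.TypedDict):
    name: str
    ingredients: list[str]

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "List a simple pancake recipe.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=list[Recipe],
    ),
)
print(response.text)  # JSON conforming to the schema
```
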
User: "Compare Gemini Pro vs Gemini Flash for my use case"
Claude: Tests the same prompt on both models, comparing
speed, quality, and cost considerations.
Returns: "Recommendation: Flash for speed, Pro for complex reasoning"1. Navigate to aistudio.google.com
## Workflow

1. Navigate to aistudio.google.com
2. Create new prompt or open existing
3. Configure model and parameters
4. Enter prompt content
5. Run and evaluate output
6. Iterate on prompt
7. Export code or save template

## Selectors

```js
// New prompt button
'[aria-label="Create new prompt"]'
// Model selector
'.model-selector'  // or '[aria-label="Model"]'
// Temperature slider
'[aria-label="Temperature"]'
// Max tokens
'[aria-label="Maximum output tokens"]'
// Prompt input
'.prompt-input'  // or '[aria-label="Prompt"]'
// Run button
'[aria-label="Run"]'
// Output panel
'.output-panel'
// Get code button
'[aria-label="Get code"]'
// Save button
'[aria-label="Save"]'
// History panel
'.history-panel'
```
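
The workflow and selectors above could be driven with Playwright's Python API. This is only a sketch: AI Studio's DOM changes often, so every selector is a best guess, and the page assumes an already-authenticated Google session:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://aistudio.google.com")  # assumes you are signed in

    # Create a prompt, fill it in, run it, and read the output panel.
    page.click('[aria-label="Create new prompt"]')
    page.fill('[aria-label="Prompt"]', "Your prompt here")
    page.click('[aria-label="Run"]')
    page.wait_for_selector(".output-panel")
    print(page.inner_text(".output-panel"))

    browser.close()
```
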
## Parameters

| Parameter         | Description                          |
|-------------------|--------------------------------------|
| Temperature       | 0.0-2.0; creativity vs consistency   |
| Max Output Tokens | Up to the model limit                |
| Top-P             | 0.0-1.0; nucleus sampling            |
| Top-K             | Number of tokens to consider         |
| Stop Sequences    | Strings that stop generation         |
| Safety Settings   | Content filtering levels             |
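
These parameters map onto the API's GenerationConfig. A sketch with example values; note that safety settings are passed separately, not through GenerationConfig:

```python
import google.generativeai as genai

# Example values showing how the AI Studio parameters map onto the API.
config = genai.GenerationConfig(
    temperature=0.7,           # 0.0-2.0: creativity vs consistency
    max_output_tokens=2048,    # up to the model's limit
    top_p=0.95,                # 0.0-1.0: nucleus sampling
    top_k=40,                  # number of candidate tokens considered
    stop_sequences=["END"],    # generation stops at these strings
)
# Safety settings are not part of GenerationConfig; they go in the separate
# safety_settings argument to GenerativeModel or generate_content.
```
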