Serverless GDS sessions on Neo4j Aura — covers GdsSessions, AuraAPICredentials, DbmsConnectionInfo, SessionMemory, get_or_create, remote graph projection, gds.graph.project.remote, gds.graph.construct, algorithm execution (mutate/stream/write), async job polling, result retrieval, and session lifecycle. Use when running graph algorithms on Aura Business Critical or VDC, processing graph data from Pandas/Spark, or using the graphdatascience Python client in AGA (serverless) mode. Covers all three data source modes (AuraDB-connected, self-managed Neo4j, standalone from DataFrames). Does NOT cover the embedded GDS plugin on Aura Pro or self-managed Neo4j — use neo4j-gds-skill. Does NOT handle Cypher authoring — use neo4j-cypher-skill. Does NOT cover Snowflake Graph Analytics — use neo4j-snowflake-graph-analytics-skill.
```shell
npx skill4agent add neo4j-contrib/neo4j-skills neo4j-aura-graph-analytics-skill
```

| Deployment | Skill |
|---|---|
| Aura Free | ❌ AGA not available |
| Aura Pro | neo4j-gds-skill (embedded GDS plugin) |
| Aura Business Critical | this skill |
| Aura Virtual Dedicated Cloud | this skill |
| Non-Neo4j data (Pandas, Spark) | this skill (standalone mode) |
Requires `graphdatascience >= 1.15` (`>= 1.18` recommended). Key entry points: `AuraAPICredentials.from_env()`, `gds.verify_connectivity()`, and `sessions.delete(name)` / `gds.delete()` for cleanup.

```shell
pip install "graphdatascience>=1.15"
```

Create the sessions manager from environment credentials:

```python
import os

from graphdatascience.session import AuraAPICredentials, GdsSessions

sessions = GdsSessions(api_credentials=AuraAPICredentials.from_env())
# Reads: AURA_CLIENT_ID, AURA_CLIENT_SECRET, AURA_PROJECT_ID (optional)
# Create API credentials in Aura Console → Account → API credentials
```

If `AURA_PROJECT_ID` is not set in the environment, pass `project_id=` to `AuraAPICredentials` directly.

Estimate the memory tier from graph size and the planned algorithm categories:

```python
from graphdatascience.session import AlgorithmCategory, SessionMemory

memory = sessions.estimate(
    node_count=1_000_000,
    relationship_count=5_000_000,
    algorithm_categories=[
        AlgorithmCategory.CENTRALITY,
        AlgorithmCategory.NODE_EMBEDDING,
        AlgorithmCategory.COMMUNITY_DETECTION,
    ],
)
# Returns a SessionMemory tier, e.g. SessionMemory.m_8GB
# Fixed tiers: m_2GB … m_256GB — see references/limitations.md
```
Attach to an AuraDB instance (Business Critical or VDC):

```python
from datetime import timedelta

from graphdatascience.session import CloudLocation, DbmsConnectionInfo, SessionMemory

db_connection = DbmsConnectionInfo(
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
    aura_instance_id=os.environ["AURA_INSTANCEID"],  # from Aura Console URL
)

gds = sessions.get_or_create(
    session_name="my-analysis",
    memory=memory,
    db_connection=db_connection,
    ttl=timedelta(hours=2),
)
gds.verify_connectivity()
```
Or connect to a self-managed Neo4j DBMS (a `cloud_location` for the session is required):

```python
db_connection = DbmsConnectionInfo(
    uri=os.environ["NEO4J_URI"],  # e.g. "bolt://my-server:7687"
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)

gds = sessions.get_or_create(
    session_name="my-analysis-sm",
    memory=SessionMemory.m_8GB,
    db_connection=db_connection,
    ttl=timedelta(hours=2),
    cloud_location=CloudLocation("gcp", "europe-west1"),
)
gds.verify_connectivity()
```
session_name="my-standalone",
memory=SessionMemory.m_4GB,
ttl=timedelta(hours=1),
cloud_location=CloudLocation("gcp", "europe-west1"),
)
gds.verify_connectivity()get_or_create()G, result = gds.graph.project(
"my-graph",
"""
CALL () {
MATCH (p:Person)
OPTIONAL MATCH (p)-[r:KNOWS]->(p2:Person)
RETURN p AS source, r AS rel, p2 AS target,
p {.age, .score} AS sourceNodeProperties,
p2 {.age, .score} AS targetNodeProperties
}
RETURN gds.graph.project.remote(source, target, {
sourceNodeLabels: labels(source),
targetNodeLabels: labels(target),
sourceNodeProperties: sourceNodeProperties,
targetNodeProperties: targetNodeProperties,
relationshipType: type(rel)
})
""",
)
print(f"Projected {G.node_count()} nodes, {G.relationship_count()} relationships")CALL () { ... }UNIONCALLimport pandas as pd
In standalone mode, build the graph directly from DataFrames:

```python
import pandas as pd

nodes_df = pd.DataFrame([
    {"nodeId": 0, "labels": "Person", "age": 30},
    {"nodeId": 1, "labels": "Person", "age": 25},
])
rels_df = pd.DataFrame([
    {"sourceNodeId": 0, "targetNodeId": 1, "relationshipType": "KNOWS"},
])

G = gds.graph.construct("my-graph", nodes_df, rels_df)
# Multiple DataFrames: gds.graph.construct("g", [nodes1, nodes2], [rels1, rels2])
```

Node frames require `nodeId` and `labels` columns; relationship frames require `sourceNodeId`, `targetNodeId`, and `relationshipType`. Remaining columns become properties in the `construct()`-ed graph.
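That column contract can be checked up front before a round-trip to the session. This is a sketch with a hypothetical `validate_construct_frames` helper (not part of the graphdatascience API), using only pandas:

```python
import pandas as pd

REQUIRED_NODE_COLS = {"nodeId", "labels"}
REQUIRED_REL_COLS = {"sourceNodeId", "targetNodeId", "relationshipType"}

def validate_construct_frames(nodes_df: pd.DataFrame, rels_df: pd.DataFrame) -> list[str]:
    """Return a list of schema problems; an empty list means the frames look OK."""
    problems = []
    missing_node = REQUIRED_NODE_COLS - set(nodes_df.columns)
    if missing_node:
        problems.append(f"nodes missing columns: {sorted(missing_node)}")
    missing_rel = REQUIRED_REL_COLS - set(rels_df.columns)
    if missing_rel:
        problems.append(f"relationships missing columns: {sorted(missing_rel)}")
    # Relationship endpoints must reference node ids that actually exist
    if not missing_node and not missing_rel:
        known = set(nodes_df["nodeId"])
        dangling = (set(rels_df["sourceNodeId"]) | set(rels_df["targetNodeId"])) - known
        if dangling:
            problems.append(f"relationships reference unknown nodeIds: {sorted(dangling)}")
    return problems

nodes = pd.DataFrame([{"nodeId": 0, "labels": "Person"}, {"nodeId": 1, "labels": "Person"}])
rels = pd.DataFrame([{"sourceNodeId": 0, "targetNodeId": 1, "relationshipType": "KNOWS"}])
print(validate_construct_frames(nodes, rels))  # []
```

Running the check locally is cheap compared to a failed `construct()` call against a live session.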
Run algorithms in any of the three execution modes:

```python
# Mutate — chain results without writing to the DB
gds.pageRank.mutate(G, mutateProperty="pagerank", dampingFactor=0.85)
gds.fastRP.mutate(
    G,
    mutateProperty="embedding",
    embeddingDimension=128,
    featureProperties=["pagerank"],
    randomSeed=42,
)

# Stream — inspect results as a DataFrame
df = gds.pageRank.stream(G)
print(df.sort_values("score", ascending=False).head(10))

# Write — persist to the connected Neo4j DB (connected modes only)
gds.louvain.write(G, writeProperty="community")
```

For algorithm selection and parameters, see neo4j-gds-skill.
Long-running calls may return an async job object; poll `.status()` until a terminal state:

```python
import time

job = gds.pageRank.mutate(G, mutateProperty="pagerank")

# If a job object is returned (async mode), poll explicitly:
if hasattr(job, "status"):
    while job.status() not in ("RUNNING_DONE", "FAILED", "CANCELLED"):
        time.sleep(5)
        print(f"Job status: {job.status()}")
    if job.status() != "RUNNING_DONE":
        raise RuntimeError(f"Algorithm job failed: {job.status()}")
```
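The polling loop can be factored into a reusable helper with a timeout. `wait_for_job` is a hypothetical name, and the `sleep` parameter is injected only so the loop can be exercised without real waiting; it works with any object exposing `status() -> str`:

```python
import time

TERMINAL = {"RUNNING_DONE", "FAILED", "CANCELLED"}

def wait_for_job(job, poll_seconds=5.0, timeout_seconds=3600.0, sleep=time.sleep):
    """Poll until the job reaches a terminal status; raise on failure or timeout."""
    deadline = time.monotonic() + timeout_seconds
    status = job.status()
    while status not in TERMINAL:
        if time.monotonic() > deadline:
            raise TimeoutError(f"job still {status} after {timeout_seconds}s")
        sleep(poll_seconds)
        status = job.status()
    if status != "RUNNING_DONE":
        raise RuntimeError(f"algorithm job ended with status {status}")
    return status
```

A bounded wait avoids a notebook cell that spins forever when a job is stuck.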
Stream computed node properties back as a DataFrame:

```python
# Stream node properties — one column per property
result_df = gds.graph.nodeProperties.stream(
    G,
    node_properties=["pagerank", "embedding"],
    separate_property_columns=True,
    db_node_properties=["name"],  # pull from the connected DB for context (connected modes only)
)
result_df.head(10)
```

`db_node_properties` requires a connected database. In standalone mode, merge the results with your source DataFrame instead:

```python
result_df = gds.graph.nodeProperties.stream(G, ["pagerank"], separate_property_columns=True)
result_df.merge(nodes_df[["nodeId", "name"]], how="left")
```
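The merge pattern can be exercised with plain pandas before touching a session; the frames below are illustrative stand-ins for the stream output and the source nodes:

```python
import pandas as pd

# Stand-in for the stream output: nodeId plus one computed property
result_df = pd.DataFrame({"nodeId": [0, 1], "pagerank": [0.45, 0.30]})
# Stand-in for the original nodes frame with a display column
nodes_df = pd.DataFrame({"nodeId": [0, 1], "name": ["Ada", "Bob"], "age": [30, 25]})

# Attach names to scores by joining on nodeId
merged = result_df.merge(nodes_df[["nodeId", "name"]], on="nodeId", how="left")
print(merged)
```

Passing `on="nodeId"` makes the join key explicit rather than relying on column-name intersection.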
Persist results and clean up:

```python
# Write multiple node properties to the connected Neo4j DB
gds.graph.nodeProperties.write(G, ["pagerank", "embedding"])

# Write relationship properties
gds.graph.relationshipProperties.write(G, G.relationship_types(), ["score"])

# Run Cypher against the connected DB from within the session
gds.run_cypher("MATCH (n:Person) RETURN count(n)")

# Drop the projected graph (frees session memory)
G.drop()

# Delete the session — stops billing
sessions.delete(session_name="my-analysis")
# or: gds.delete()
```
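Because an idle session keeps billing until it is deleted, cleanup is worth guaranteeing with try/finally. `with_session` below is an assumed helper (not part of the client), sketched around the `get_or_create`/`delete` calls shown above:

```python
def with_session(sessions, name, memory, work, **kwargs):
    """Create or attach a session, run work(gds), and always delete the session."""
    gds = sessions.get_or_create(session_name=name, memory=memory, **kwargs)
    try:
        return work(gds)
    finally:
        # Runs even if work() raises, so the session never outlives the analysis
        sessions.delete(session_name=name)
```

Usage would look like `with_session(sessions, "my-analysis", memory, lambda gds: gds.pageRank.stream(G), db_connection=db_connection, ttl=timedelta(hours=2))`.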
```python
# List active sessions
from pandas import DataFrame
DataFrame(sessions.list())

# Reconnect to an existing session
gds = sessions.get_or_create(session_name="my-analysis", memory=..., db_connection=...)
```

| Error | Cause | Fix |
|---|---|---|
| Authentication failure | Wrong API credentials | Regenerate in Aura Console → Account → API credentials |
| Session not found | Session expired (TTL exceeded) or name typo | Check `sessions.list()`; recreate if needed |
| Graph not found | Projection dropped or session reconnected without re-projecting | Re-run the projection |
| Algorithm job fails | Memory limit exceeded or unsupported algorithm | Increase the session memory tier |
| Out of memory | Graph larger than estimated | Re-estimate with actual counts; pick next tier up |
| Results empty after session reconnect | Results not written before session was closed | Always write/stream before deleting the session |
| `construct()` fails | String column in nodes DataFrame | Drop string property columns before `construct()` |
| Session creation rejected | AGA feature not activated | Enable in Aura Console → project settings |
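The "session not found" row above can be guarded in code. This sketch assumes `sessions.list()` yields objects with a `name` attribute — verify the return type against your client version:

```python
def find_session(sessions, name):
    """Return the listed session matching `name`, or None if it is gone (e.g. TTL expired)."""
    for info in sessions.list():
        if getattr(info, "name", None) == name:
            return info
    return None
```

Calling `find_session(sessions, "my-analysis")` before reconnecting distinguishes a typo from an expired session.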
| Need | URL |
|---|---|
| AGA Python client docs | |
| AuraDB tutorial notebook | |
| GDS algorithm reference | |
Quick checklist: set `AURA_CLIENT_ID` and `AURA_CLIENT_SECRET`; size the session with `sessions.estimate(...)`; confirm the connection with `gds.verify_connectivity()`; poll async jobs until `RUNNING_DONE`; clean up with `sessions.delete(...)` or `gds.delete()`.