# Getting Started

## Installation

### All Installation Options
| Extra | Adds | Use Case |
|---|---|---|
| `aime-loc` | Core SDK | AI model scanning |
| `[eeg]` | MNE, scipy | EEG loading, preprocessing |
| `[viz]` | matplotlib, plotly | Charts, figures |
| `[eeg,viz]` | MNE, scipy, matplotlib | Full EEG workflow |
| `[eeg,realtime]` | MNE, scipy, pylsl | Real-time EEG streaming |
| `[mcp]` | fastmcp | MCP server for AI agents |
| `[all]` | Everything | Full toolkit |
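The extras in the table map to standard pip extras syntax. A sketch, assuming the package is published under the `aime-loc` name shown above:

```shell
# Core SDK only
pip install aime-loc

# With extras (quote the brackets so your shell doesn't expand them)
pip install "aime-loc[eeg,viz]"

# Everything
pip install "aime-loc[all]"
```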
## Get Your API Key

1. Visit aime-loc.com/signup
2. Choose your tier:
    - **Free**: 5 scans/month (great for trying it out)
    - **Academic**: $99/month, 50 scans (universities, students)
    - **Lab**: $299/month, 500 scans (research labs)
    - **Enterprise**: $999/month, 10,000 scans (AI companies)
3. Copy your API key (format: `sk-aime-{tier}_{32chars}`)
## Authentication
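A minimal sketch of supplying your key to the client. The `AIME_API_KEY` environment-variable name and the `api_key` keyword are assumptions, not confirmed API; the examples on this page construct `LOC()` with no arguments, which suggests the key is read from the environment:

```python
import os

from aime_loc import LOC

# Assumption: the client reads the key from an environment variable
# (AIME_API_KEY is a guess; check your dashboard for the exact name).
os.environ["AIME_API_KEY"] = "sk-aime-free_<32chars>"

loc = LOC()  # picks up the key from the environment
# loc = LOC(api_key="sk-aime-free_<32chars>")  # or pass it explicitly
```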
## Path 1: AI Model Analysis

### Your First Scan
```python
from aime_loc import LOC

loc = LOC()

# Scan a model (26 questions, ~2 minutes)
profile = loc.scan("meta-llama/Llama-4-Scout")

# View the result
print(profile)
# CognitiveProfile(meta-llama/Llama-4-Scout, TC=14.20%)

print(profile.summary())
# meta-llama/Llama-4-Scout: TC=14.20% (best: Emotion 19.97%, worst: Intuition 11.27%)

# Per-function scores
for func, score in profile.tc_by_function().items():
    print(f"  {func}: {score:.2f}%")
```
### Visualize

```python
profile.radar_chart()    # 13-function radar
profile.bar_chart()      # Per-function bars
profile.export_figure("profile.png", journal="nature")  # Publication-ready
```
### Compare Two Models
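A sketch assembled from calls shown elsewhere on this page: scan two models, then overlay them with `cognitive_radar`, which accepts a list of profiles in the Path 3 example. The second model ID here is hypothetical, used only for illustration:

```python
from aime_loc import LOC
from aime_loc.eeg.viz import cognitive_radar

loc = LOC()

scout = loc.scan("meta-llama/Llama-4-Scout")
other = loc.scan("mistralai/Mistral-Large")  # hypothetical second model ID

# Overlay both profiles on one radar chart
cognitive_radar([scout, other], save="scout_vs_other.png")
```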
### Export Data

```python
profile.to_json("profile.json")   # JSON
profile.to_csv("scores.csv")      # CSV
print(profile.to_latex())         # LaTeX table for papers
```
Next: Scanning Models | Training Audits | Batch Benchmarking
## Path 2: EEG Analysis

### Your First EEG Score
```python
from aime_loc import LOC
from aime_loc.eeg import EEG

loc = LOC()
eeg = EEG(loc)

# Load any EEG format (auto-detected)
recording = eeg.load("subject01.set")    # EEGLAB
# recording = eeg.load("subject01.edf")   # EDF
# recording = eeg.load("subject01.vhdr")  # BrainVision

# Preprocess (sensible defaults)
recording.preprocess()

# Extract PSD epochs
epochs = recording.extract_epochs(duration=2.0)
print(epochs)
# EpochSet(n_epochs=450, freq_range=0.5-45.0 Hz)

# Score via API (server-side TC scoring)
profile = eeg.score(epochs)
print(profile)
# EEGCognitiveProfile(sub-01, TC=23.40%)

# Visualize
profile.radar_chart()
```
### Consumer Devices

```python
# Load from consumer EEG headset with built-in presets
recording = eeg.load("meditation.csv", device="muse", sfreq=256)
recording.preprocess()
epochs = recording.extract_epochs()
profile = eeg.score(epochs)
```
Supported devices: `muse`, `openbci_cyton`, `emotiv_epoc`, `neurosity`, `gtec_unicorn`
### Multi-Subject Study

```python
from pathlib import Path

session = eeg.session()

for f in Path("data/").glob("sub-*/eeg/*.set"):
    rec = eeg.load(f)
    rec.preprocess()
    epochs = rec.extract_epochs()
    session.add(epochs, subject=f.parent.parent.name, task=f.stem)

results = eeg.score_session(session)
results.summary_table()
results.export_csv("study_results.csv")
```
### EEG Visualization

```python
from aime_loc.eeg.viz import psd_plot, timeseries_plot, cognitive_radar

# PSD with mean + standard deviation
psd_plot(epochs, save="psd.png")

# Power time series
timeseries_plot(epochs, profile, save="timeseries.png")

# 13-axis cognitive radar
cognitive_radar(profile, save="eeg_radar.png", journal="nature")
```
Next: EEG Quick Start | Consumer Devices | Cross-Substrate Comparison
## Path 3: Cross-Substrate Comparison

The most powerful feature of AIME LOC: comparing human and AI cognitive profiles within the same framework.
```python
# Score a human EEG recording
human = eeg.score(epochs)

# Score an AI model
llm = loc.scan("meta-llama/Llama-4-Scout")

# Overlay on the same radar chart
from aime_loc.eeg.viz import cognitive_radar

cognitive_radar([human, llm], save="human_vs_ai.png")
```
Next: Cross-Substrate Comparison
## Context Manager

```python
with LOC() as loc:
    profile = loc.scan("meta-llama/Llama-4-Scout")
    profile.radar_chart()
# Connection automatically closed
```