What is MiniDoge's Lab?

MiniDoge's Lab — formerly Agens Machina ("machine that acts") — is an autonomous AI experiment lab founded by Peter Saddington, a four-time startup founder, AI systems architect, and General Partner of StaaS Fund ($33M+ deployed). MiniDoge, a specialized Commerce Herald AI agent within the Council of Dogelord ecosystem, proposes, builds, and runs live experiments with minimal human oversight. The lab's flagship experiment is PolyDoge — a BTC prediction market trading engine that has tracked 8,600+ positions on Polymarket across 13 experiments since February 2026, with a full learning loop that evolves the algorithm autonomously.

The lab operates on a "ship first, explain later" governance model — AI agents propose experiments through structured Discord messages, Peter approves or vetoes via emoji reactions, and automated systems execute within pre-set budget constraints and risk parameters. Average time from proposal to live deployment is under 24 hours. All results are published regardless of outcome — including experiments that produced zero-percent hit rates, architectural dead ends, and the feedback loop death spiral that v3.3 was designed to fix.

MiniDoge's Lab is not a demo or whitepaper — it's a live production system where autonomous AI agents execute real trades and publish every result including failures. Infrastructure: GitHub Actions (automation), Supabase (data), Cloudflare Pages (deployment), Gemini/Groq/Cerebras (LLM providers). Every scan, trade signal, and experiment outcome is timestamped, logged, and published at dogelord.com/polydoge with full methodology transparency.

The Council Connection

The Council of Dogelord is a network of four specialized autonomous AI agents that manage Peter Saddington's 14-site digital operations — covering social media coordination, security vulnerability scanning, infrastructure health monitoring, and business growth experiments. MiniDoge's Lab serves as the council's dedicated research and experimentation arm, operated by the MiniDoge agent. The council publishes daily operational briefings to Discord and YouTube, with Peter Saddington maintaining human-in-the-loop governance through emoji-based approval of agent proposals. The council's operational hub is at dogelord.com.

Each council agent files a daily operational report to a dedicated Discord server, where Peter Saddington reviews and responds via emoji reactions — thumbs up for approval, thumbs down for veto, question mark for requests requiring clarification. This human-in-the-loop governance model lets one person oversee four autonomous agents managing 14 production websites, multiple GitHub Actions pipelines, Supabase databases, and Cloudflare infrastructure without manual intervention in day-to-day operations. The council has maintained this cadence for over 500 consecutive days of daily briefings since deployment.

Active Experiments

Experiment #001: PolyDoge

ACTIVE — v3.3 · Prediction Markets · LLM Scoring · Kelly Sizing

PolyDoge is an autonomous AI prediction engine that scans BTC price markets on Polymarket every 2 hours, scoring opportunities using Gemini-powered analysis with live data from CoinGecko, Binance funding rates, Fear & Greed Index, smart money leaderboard, order book depth, whale detection, trade flow, and crypto community sentiment. The system runs a full learning loop: every resolution triggers post-mortem analysis, signal lift tracking, regime detection, and dynamic strategy adjustment. Position-aware Kelly stake sizing scales bets with bankroll. 8,600+ positions tracked, 1,300+ resolved. Paper trading with $10,000 starting bankroll.

Status

Algorithm v3.3 (April 2026). BTC-only, day/week horizons. Hour-scale bets killed (5% hit rate, -$604 P&L). The NO position is the engine — 85%+ hit rate on day/week scale. v3.3 fixes a critical feedback loop where poisoned all-time stats (polluted by dead hour-scale data) caused the LLM to systematically suppress confidence, which produced worse data, which produced more suppressive lab notes — a compounding error spiral. The fix: filtered stats only (day/week, 30-day window), balanced prediction replay (wins + losses), lab note confidence adjustment removed, dynamic strategy dampening disabled. Shadow analysis shows $4,700+ in profitable bets were being blocked by overly aggressive gates.

Experiment #002: Racing Network

ACTIVE — Affiliate Marketing · AI Content

The Racing Network is an autonomous AI affiliate marketing experiment that tests whether AI-generated hyper-local content can produce measurable affiliate revenue from motorsport enthusiasts with zero advertising spend. MiniDoge, the Commerce Herald agent in the Council of Dogelord, built three niche racing websites in a single day — racingnear.me covering track days and racing experiences, simracingnear.me covering sim racing setups and virtual events, and kartingnear.me covering go-kart tracks and karting leagues — each targeting underserved long-tail SEO keywords where major motorsport publishers like Motorsport.com and Racer.com do not compete. According to the experiment's keyword analysis, search terms like "go-kart tracks near Atlanta" and "sim racing events in Georgia" have competition scores below 15 on a 100-point scale while maintaining meaningful commercial intent, creating an entry point for AI-generated content to rank without paid promotion.

The Sites

The Racing Network infrastructure is a production-grade content deployment system hosted on Cloudflare Pages with GitHub Actions automation that manages all three sites from a single monorepo. Each site generates city-specific landing pages targeting over 200 metropolitan areas across the United States, with automated internal linking between geographic regions and content categories to build topical authority clusters that search engines reward with higher rankings. According to MiniDoge's build metrics, the combined Racing Network produces over 600 indexed pages across the three domains, each page following a standardized template structure that includes local event calendars, equipment recommendations, venue directories, and affiliate product integrations. The experiment tracks revenue attribution through unique affiliate links per page, enabling MiniDoge to measure which geographic markets and content formats generate the highest conversion rates from organic search traffic.

Methodology

The Racing Network methodology is a four-stage content automation pipeline designed by MiniDoge to generate, deploy, and monitor hyper-local motorsport pages at scale without manual content creation. The first stage uses AI content generation to produce city-specific pages covering track locations, event schedules, gear recommendations, and local racing community information for each target market. The second stage deploys generated pages to Cloudflare Pages through automated GitHub Actions workflows that run on a daily schedule. The third stage monitors Google Search Console indexing data and organic traffic patterns to identify which content formats and keyword structures produce the highest click-through rates. The fourth stage feeds performance data back into the content generation system to refine targeting and improve page quality over successive publishing cycles. According to MiniDoge's deployment logs, the pipeline generates and deploys fresh content daily across all three Racing Network sites without human intervention.

Status

As of March 2026, the Racing Network is in active production, with all three sites generating fresh content daily through automated GitHub Actions pipelines. According to MiniDoge's deployment data, the combined network has published over 600 indexed pages across racingnear.me, simracingnear.me, and kartingnear.me, covering race schedules, track guides, gear reviews, and local event listings for over 200 metropolitan areas in the United States. The experiment is currently in the organic SEO compounding phase — search engine indexing and domain authority building typically require 30-60 days before meaningful traffic patterns emerge from newly published content. First affiliate revenue attribution data is expected within 60 days of launch, at which point MiniDoge will evaluate whether the AI-generated hyper-local content model produces sufficient click-through and conversion rates to justify scaling the approach to additional niche verticals beyond motorsport.

About Peter Saddington

Peter Saddington is a four-time startup founder, software engineer, and AI systems architect who holds three Master's degrees and has trained over 17,000 professionals worldwide as a Certified Scrum Trainer. Peter Saddington serves as General Partner of StaaS Fund ($33M+ deployed), operates 14 production websites across the staas.fund ecosystem, and runs Saddington Racing — a competitive motorsports program. Peter Saddington has been building and shipping production software products for over 15 years, including autonomous AI agent systems, prediction market algorithms, and multi-site content networks.

Peter Saddington's approach to AI is pragmatic — build systems that work on production data, ship them fast, and learn from real results rather than theoretical benchmarks or academic papers. MiniDoge's Lab is Peter Saddington's dedicated lab for testing how far autonomous AI agents can operate when given real problems, real financial constraints, and real accountability. The lab has generated over 40 experiments covering prediction market trading, AI-generated affiliate content, autonomous security scanning, and cross-site business analytics — all published with full methodology and results transparency.

Follow Peter on X (@agilepeter) for updates on experiments, builds, and the occasional hot take on AI.

Open Source & Transparency

MiniDoge's Lab publishes experiment results, methodology, and lessons learned in real time as part of Peter Saddington's commitment to building AI in public with full transparency. The lab's open-source approach is designed to demonstrate that autonomous AI agents can build useful products and services today with minimal budget and full public accountability — a model Peter Saddington calls "AI building in public." Every experiment includes published methodology, data sources, decision logic, success metrics, and post-mortem analysis regardless of whether the experiment succeeded or failed.

The experiment journal at dogelord.com/polydoge/about documents every scan, every trade signal, every deployment, and every failure with timestamped entries and detailed technical analysis. MiniDoge's Lab's transparency model serves as both a learning resource for builders interested in autonomous AI systems and an accountability mechanism that ensures the lab's published results accurately reflect real-world performance — including the experiments that produced negative returns, zero-percent hit rates, or architectural dead ends.

Experiment Journal

Detailed daily logs from every experiment session. Click any entry to expand.

experiment log — minidoge's lab
April 2026
Apr 3 #001 v3.3 — Breaking the feedback loop death spiral

Objective: Diagnose and fix why the engine was placing only 2 real bets out of 156 candidates — with 154 profitable bets being shadow-logged but never placed.

Root cause — compounding error spiral: Five layers of confidence suppression were feeding on each other:

  • Poisoned all-time Brier score (0.274): Included dead hour-scale data (5% hit rate, -$604). This triggered the "NO SKILL — your confidence is noise" prompt message, which told the LLM to be "MUCH more humble."
  • Lab note compounding: 5 consecutive lab notes all recommended "-15% to -20% confidence adjustment." Each note was reacting to the suppressed confidence from the previous cycle.
  • Dynamic strategy amplification: The dynamic strategy system read 2+ notes saying "reduce confidence" and injected a THIRD suppression layer: "CONFIDENCE WARNING: reduce by 5-10%."
  • Negativity-biased replay: The LLM saw 10 recent losses but zero wins, creating a learned helplessness where it thought everything it touched failed.
  • Net effect: The LLM output 55% confidence on everything → everything failed the 65% NO floor → 154/156 bets shadow-logged → no new clean data to improve Brier → cycle repeats, worse each time.
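The "NO SKILL" trigger above comes down to Brier score arithmetic. A minimal sketch (using hypothetical ledger records — the field names are assumptions, not the engine's actual schema) of how dead hour-scale data can poison an all-time Brier score while a filtered day/week view stays healthy:

```python
from statistics import mean

def brier(predictions):
    """Mean squared error between stated confidence and the 0/1 outcome.
    0.25 is roughly "no skill" (always saying 50%); lower is better."""
    return mean((p["confidence"] - p["won"]) ** 2 for p in predictions)

ledger = [
    # dead hour-scale data drags the all-time score toward "no skill"
    {"horizon": "hour", "confidence": 0.70, "won": 0, "age_days": 45},
    {"horizon": "hour", "confidence": 0.65, "won": 0, "age_days": 44},
    # recent day/week data reflects the live strategy
    {"horizon": "day",  "confidence": 0.80, "won": 1, "age_days": 3},
    {"horizon": "week", "confidence": 0.75, "won": 1, "age_days": 9},
]

all_time = brier(ledger)  # ≈ 0.254 — reads as "no skill"
filtered = brier([p for p in ledger
                  if p["horizon"] in ("day", "week") and p["age_days"] <= 30])
# filtered ≈ 0.051 — clear skill once the dead regime is excluded
```

Same bets, two very different self-assessments — which is exactly why v3.3 makes the filtered view primary.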

The irony: Shadow analysis showed those 154 blocked bets were hitting at 88.7% with +$4,704 virtual P&L. The engine was right — it just wasn't allowed to bet.

BEFORE (v3.2): 2/156 bets placed | 154 shadowed | +$4,704 left on table
FIX: 6 changes across 3 files | Algorithm version bumped to v3.3
CHANGES:
  A. Filtered stats (day/week, 30d) replace poisoned all-time as primary view
  B. Lab note injection: 1 note max, confidence_adjustment stripped
  C. Dynamic strategy confidence dampening: disabled entirely
  D. Prediction replay: balanced 5 wins + 5 losses (was 10 losses only)
  E. Post-mortem LLM sees filtered data, can no longer generate confidence_adjustment
  F. NO confidence floor lowered: 65% → 60% (shadow data shows 89.4% hit at 60%+)
Lesson: When a learning system learns from its own outputs, errors compound exponentially. Each cycle's bad data produces worse recommendations, which produce worse data. The fix isn't to tune parameters — it's to break the feedback loop at every point where bad data enters. Never let all-time stats override filtered views when the data regime has fundamentally changed.
Lesson: Shadow bet analysis is the most important diagnostic in the system. Without it, we'd never know the gates were blocking $4,700 in profitable bets. Always track what you DON'T do alongside what you do.
Lesson: Showing an LLM only its failures creates learned helplessness. Balanced feedback (wins AND losses) produces better calibration than negativity-biased "learn from mistakes" approaches. The LLM needs to know what it does RIGHT so it keeps doing it.
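Change D (balanced replay) is the kind of fix that fits in a few lines. A sketch, assuming resolved predictions carry a `won` flag and a `resolved_at` timestamp (hypothetical field names):

```python
def balanced_replay(resolved, n_each=5):
    """Replay the most recent wins AND losses into the scoring prompt,
    instead of the 10 most recent losses only (the v3.2 behavior)."""
    recent = sorted(resolved, key=lambda p: p["resolved_at"], reverse=True)
    wins   = [p for p in recent if p["won"]][:n_each]
    losses = [p for p in recent if not p["won"]][:n_each]
    return wins + losses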

Next: Monitor v3.3 over the next 48-72 hours. Key metrics: are confidence levels recovering above 60%? Are more bets clearing the floor? Is hit rate on day/week maintaining above 60%?

March 2026
Mar 24 #013 Position-aware gating — unlock the NO engine

Objective: Replace the single 85% confidence floor, which was killing profitable NO bets — shadow analysis showed $500+ in profitable bets being blocked.

The data:

  • NO shadow: 477/535 (89.2%), +$501 — didn't need 85% confidence to be profitable
  • YES <30% market price: 12-55% hit — longshot YES bets are systematically overconfident
  • YES >60% market price: 87-100% hit — confirming favorites works
  • 85-95% band: 94% hit but only 2.3% ROI (tiny payoffs on near-certainties)
  • 60-74% band: 55% hit but 7.6% ROI (best ROI band, was being filtered out)
The fix:
YES floor: 80% (below 80% = coin flip)
NO floor: 65% (NO is strong even at lower confidence)
YES price sanity: raised from 15% to 30% (kills longshot YES bets)
Backtest: 88.2% hit rate, +$589 (vs +$38 at old 85% floor)
Lesson: YES and NO bets have fundamentally different performance profiles. A single confidence floor destroys value by treating them the same. Position-aware gating lets each position type play to its strength.
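The position-aware gate reduces to a few thresholds. A sketch using the #013 numbers (constant names are illustrative, not the engine's):

```python
# Thresholds from the #013 entry; confidences and prices as fractions of 1.
YES_CONF_FLOOR = 0.80   # below this, YES is roughly a coin flip
NO_CONF_FLOOR  = 0.65   # NO stays profitable at lower confidence
YES_PRICE_MIN  = 0.30   # kill longshot YES bets priced under 30%

def passes_gate(side, confidence, market_price):
    """Position-aware gating: YES and NO get different floors."""
    if side == "YES":
        return confidence >= YES_CONF_FLOOR and market_price >= YES_PRICE_MIN
    return confidence >= NO_CONF_FLOOR  # side == "NO"
```

A single shared floor would force NO bets up to the YES threshold (or vice versa), which is exactly the value-destroying behavior the lesson below describes.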
Mar 17 #008 BTC-only pivot — killing alts, Kelly engine, learning loop

Objective: Three simultaneous upgrades: focus on the only profitable category, add intelligent stake sizing, and build a self-improving learning loop.

Why alts were killed (#008):

  • ETH: 5/39 (12.8%), -$49 — 74% hour-scale, avg conf 57%
  • SOL: 0/32 (0.0%), -$54 — 87% hour-scale, YES-biased 24/8
  • XRP: 1/28 (3.6%), -$31 — 85% hour-scale
  • Weather: 14/53 (26%), -$6 — below chance
  • Root causes: no alt-specific data providers, hour-scale dominance, systematic YES bias

Kelly Stake Engine (#009): Quarter-Kelly sizing replaces flat $5/trade. Position-aware: NO bets get 7.4% Kelly fraction (97% hit, thin edge), YES bets get 15.8% (84% hit, wider edge). Safety caps: 5% per-bet, 50% daily exposure, $1 minimum.
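For a binary share that costs `price` and pays $1 on a win, the full Kelly fraction is (q − p)/(1 − p), where q is the estimated win probability. A quarter-Kelly sketch with the safety caps described above (the function shape is an assumption — the engine's exact formula may differ):

```python
def quarter_kelly_stake(bankroll, win_prob, price,
                        kelly_mult=0.25, per_bet_cap=0.05, min_stake=1.0):
    """Stake for buying a binary share at `price` that pays $1 on a win.
    Full Kelly for this payoff structure is (q - p) / (1 - p)."""
    edge = win_prob - price
    if edge <= 0:
        return 0.0                       # no edge, no bet
    kelly_fraction = edge / (1.0 - price)
    fraction = min(kelly_mult * kelly_fraction, per_bet_cap)  # 5% per-bet cap
    stake = bankroll * fraction
    return max(stake, min_stake)         # $1 minimum stake
```

Quarter-Kelly trades some growth rate for much lower variance — full Kelly assumes your probability estimates are exactly right, which a calibrating LLM's are not.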

Learning Loop (#010): Three new subsystems behind feature flags:

  • Prediction Replay: Last 10 wrong predictions with full context injected into scoring prompt
  • Regime Detection: Classifies market as BULL/BEAR/CHOP/TRANSITION from CoinGecko + funding + Fear&Greed
  • Dynamic Strategy: Reads lab note patterns, activates adjustments only when 2+ consecutive notes agree
CATEGORIES: btc_price only (was 5)
HORIZONS: day + week only (hours killed: 5% hit, -$604)
STAKE MODE: Kelly (was flat $5)
BTC DAY-SCALE: 137/205 (66.8%), +$30
KELLY PROJECTION: $100 → $230 in 9 months at current edge
Lesson: Kill your losers fast. ETH/SOL/XRP had no alt-specific data providers, so the LLM was guessing with BTC-tuned signals. Adding more categories without category-specific data is just adding noise.
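The Regime Detection subsystem can be sketched as a simple rule classifier over the three inputs named above (the thresholds here are illustrative guesses, not the engine's tuned values):

```python
def detect_regime(price_change_7d, funding_rate, fear_greed):
    """Toy regime classifier in the spirit of the #010 subsystem:
    combine 7-day trend, funding rate, and Fear & Greed (0-100)
    into BULL / BEAR / CHOP / TRANSITION."""
    trending_up   = price_change_7d > 0.05 and fear_greed >= 60
    trending_down = price_change_7d < -0.05 and fear_greed <= 40
    if trending_up and funding_rate > 0:
        return "BULL"
    if trending_down and funding_rate < 0:
        return "BEAR"
    if abs(price_change_7d) < 0.02:
        return "CHOP"
    return "TRANSITION"
```

The regime label is then injected into the scoring prompt so the LLM can weight signals differently in trending versus choppy markets.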
Mar 10 #006 The crypto pivot — sports killed, BTC focus

Objective: After 2 weeks of data, sports markets (NBA, NFL, MLB) showed 0% edge despite 9 data providers. Pivot entirely to cryptocurrency.

The evidence:

  • NBA/NFL/MLB: systematically below 50% hit rate — worse than a coin flip
  • BTC: 77% accuracy on day-scale predictions with CoinGecko + funding rate data
  • The sports LLM prompt was making gut calls despite having ESPN, umpire, referee, and fatigue data
  • BTC has mechanical signals (funding rate, fear/greed, order book) that the LLM could actually use
REMOVED: NBA, NFL, MLB, all ESPN data providers
ADDED: ETH, SOL, XRP categories + multi-coin CoinGecko data
BTC HIT RATE: 77% on day-scale
SPORTS HIT RATE: <50% across all categories
Lesson: Data velocity isn't enough without signal quality. NBA had 50+ resolutions per day (great for learning speed) but the signals were too noisy for an LLM to extract edge. BTC has fewer markets but cleaner signals — funding rate, fear/greed, and on-chain data map directly to price movement in ways that ESPN injury reports don't map to game outcomes.
February 2026
Feb 28 #002 Racing Network — 3 sites live in 1 day

Objective: Test whether AI-generated hyper-local content can drive affiliate revenue from racing enthusiasts — with zero ad spend.

What MiniDoge built today: three niche racing sites — racingnear.me, simracingnear.me, and kartingnear.me.

The thesis: Big publishers ignore hyper-local racing content. "Go-kart tracks near [city]" and "sim racing events [state]" are underserved long-tail keywords with real commercial intent. AI can generate this content daily at near-zero cost. Affiliate links to racing gear, track bookings, and sim equipment monetize the traffic.

SITES LAUNCHED: 3
CONTENT MODEL: AI-generated daily — schedules, guides, reviews
REVENUE MODEL: Affiliate commissions (racing gear, track bookings, sim equipment)
AD SPEND: $0
TIME TO LAUNCH: 1 day
Lesson: Speed is the moat. 3 sites in 1 day means MiniDoge can test 10 niches in a week. The ones that get traffic survive. The rest get archived. This is portfolio theory applied to content.

Next: Wait for Google to index. Monitor Search Console for impressions. First meaningful traffic data in 2-4 weeks. Revenue signal in 30-60 days.

Feb 28 #001 PolyDoge goes 24/7 — GitHub Actions migration

Objective: Prediction markets don't sleep. Neither should the scanner. Move from macOS launchd (only runs when laptop is awake) to GitHub Actions (runs 24/7 for free).

What changed:

  • Scanner (every 2h) and resolver (every 30min) now run as GitHub Actions cron workflows
  • State persistence: ledger.jsonl and lab_notes.jsonl committed back to repo after each run
  • Cross-repo push: GH_PAT secret pushes picks.html + index.html to dogelord for Cloudflare Pages deploy
  • DOGELORD_DIR env var override handles workspace path differences between local and GHA
  • Local launchd plists unloaded — the Mac can sleep now
SCANNER: Running every 2h via GitHub Actions cron
RESOLVER: Running every 30min via GitHub Actions cron
UPTIME: 24/7 (was ~16h/day on launchd)
COST: $0 (GitHub Actions free tier)
Lesson: Default GITHUB_TOKEN is read-only for push. Add permissions: contents: write to workflows that commit back to their own repo. Also: non-greedy regex (.+?) and JSON with semicolons = silent data corruption. Always anchor regex to structural delimiters.

Bugs squashed: 403 push error (missing permissions), regex data-stacking that bloated picks.html from 371KB to 5.7MB (non-greedy .+? stopped at first semicolon in JSON). Both fixed and verified in production.
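The regex failure generalizes. A minimal repro with hypothetical data, showing why non-greedy `.+?` terminated by `;` silently truncates JSON that contains semicolons inside strings:

```python
import re

html = 'const DATA = {"note": "win; streak", "pnl": 42};\n</script>'

# Non-greedy up to the FIRST semicolon stops inside the JSON string:
bad = re.search(r"const DATA = (.+?);", html).group(1)
# bad == '{"note": "win' — silently truncated, no error raised

# Anchoring to the structural delimiter "};" captures the whole object:
good = re.search(r"const DATA = (.+?)};", html).group(1) + "}"
```

Non-greedy matching minimizes length against the *next* delimiter occurrence, not the structurally correct one — hence "always anchor regex to structural delimiters."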

Feb 26 #001 Day 1 — Market scan, scanner built, first Discord alerts

Objective: Understand where the money is on Polymarket and build infrastructure to scan it automatically.

What we did:

  • Pulled 50 live events from the Polymarket Gamma API, sorted by 24h volume
  • Analyzed market composition: Sports 55%, Geopolitics 30%, Crypto/DeFi 10%, Other 5%
  • Built polymarket_scanner.py — fetches markets, filters by volume ($1K+) and liquidity ($500+), scores top 3 with LLM, posts to Discord with reaction voting
  • Deployed scanner on 4-hour schedule via launchd
  • First scan posted 3 picks to #agens-machina Discord channel
SCAN RESULTS: 50 events → 15 passed filters → 3 picks posted
MARKETS HIT: US strikes Iran (Feb 28), US strikes Iran (Mar 31), Aliens before 2027
SCANNER STATUS: Live, running every 4 hours

Key decision: Initial focus on crypto/DeFi ecosystem markets. Hypothesis: insider trading accusations (Axiom, Meteora pattern) create recurring mispricing windows where network proximity matters more than analysis.

Lesson: Gamma API tag=crypto returns everything — sports, politics, religion — because Polymarket is a crypto platform. Don't trust API tags for content classification. Build your own classifier.
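Building your own classifier can start as plainly as keyword matching on the market question (keywords and category names here are illustrative, not the scanner's actual lists):

```python
BTC_KEYWORDS = ("bitcoin", "btc")
SPORTS_KEYWORDS = ("nba", "nfl", "mlb", "vs.", "game winner")

def classify(question):
    """Classify a market from its question text instead of trusting
    Polymarket's API tags, which lump sports and politics under crypto."""
    q = question.lower()
    if any(k in q for k in BTC_KEYWORDS):
        return "btc_price"
    if any(k in q for k in SPORTS_KEYWORDS):
        return "sports"
    return "other"
```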

Infrastructure built:

  • dogelord.com/polydoge — live site on Cloudflare Workers
  • polymarket_scanner.py — Gamma API → LLM scoring → Discord alerts
  • Dedup system + JSONL history logging
  • Hidden /about page for SEO indexing
Feb 26 #001 Day 1 (PM) — The Pivot: scanner → prediction engine, crypto → NBA

Objective: Stop watching markets. Start betting on them (paper). Build for 51%+ win rate over time.

The pivot — data velocity wins:

Analyzed the full Polymarket landscape (2,700+ active markets). Found crypto was a bad lane for learning:

MARKET ANALYSIS — "Near 50%" = crowd is uncertain = edge exists
NBA: 465 markets | 253 near 50% (54%) | 382 resolve THIS WEEK
BTC: 98 markets | 7 near 50% (7%) | 60 resolve this week
Crypto: 49 ETH mkts | 0 near 50% (0%) | 46 resolve this week
---
DECISION: Pivot to NBA (data velocity) + BTC (on-brand)

Why NBA: Same bet types repeat every night — game winner, spread, over/under. 50+ resolutions per day. The crowd is genuinely uncertain (54% of markets near 50-50). Patterns can emerge: home/away advantage, back-to-back fatigue, injury impacts, schedule spots.

Why not crypto: Most BTC markets are at 2% or 98% — the crowd is very confident and usually right. Only 7 out of 98 BTC markets had any uncertainty. No repeating structure for pattern learning.

Scanner → prediction engine:

  • MiniDoge now takes positions: YES/NO with confidence % (40-95 range)
  • Every prediction logged to a JSONL ledger with full market state
  • Outcome resolver checks Gamma API for closed markets, scores P&L
  • Performance stats feed back into future prompts ("you're 3/7 on BTC, be conservative")
  • Content classifier filters to 4 categories: nba_game, nba_spread, nba_total, btc_price
PAPER BETS PLACED: 12 (9 NBA tonight + 3 BTC)
CATEGORIES: nba_game × 9, btc_price × 3
AVG CONFIDENCE: 59% | AVG EDGE: +0.27
FIRST RESULTS: Tomorrow morning (NBA games tonight)
Lesson: For 51%+ win rate via pattern learning, you need: (1) fast resolution — quick feedback loop, (2) tight prices — crowd uncertainty = room for edge, (3) repeating structure — same bet type nightly so patterns emerge. One-off events teach nothing. NBA games repeat every night.
Lesson: A scanner says "look at this." A prediction engine says "I bet YES at 72%, here's why." The difference is accountability. Track every prediction, score every outcome, feed it back. Build for accountability from day 1.

Bug fixed: Daily scan JSON file overwrites on each run → dedup only saw latest scan's slugs → duplicate predictions in ledger. Fixed: dedup now checks the ledger itself for open bets.
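The fix pattern — dedup against the ledger itself rather than a scan file that gets overwritten — can be sketched like this (field names and statuses are assumptions about the JSONL schema):

```python
import json

def open_condition_ids(ledger_path):
    """Source of truth for dedup: scan the JSONL ledger for open bets,
    instead of the latest scan file (which is overwritten each run)."""
    open_ids = set()
    with open(ledger_path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("status") == "open":
                open_ids.add(rec["condition_id"])
    return open_ids

def is_duplicate(market, ledger_path):
    """True if we already hold an open position on this market."""
    return market["condition_id"] in open_condition_ids(ledger_path)
```

Keying on `condition_id` rather than `event_slug` also matters — one event slug can contain many sub-markets, a pitfall a later entry calls out explicitly.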

Next: Wait for tonight's NBA results. First real report card tomorrow. If hit rate is around 50%, system is working (crowd baseline). If consistently above 55%, MiniDoge has signal. Below 45%, something is wrong with the model.

Feb 26 #001 Day 1 (Night) — Full coverage: 2 sports → 4 sports, 12 bets → 415

Objective: Stop cherry-picking markets. Take a position on EVERY qualifying market. Build a provable track record through systematic coverage.

The insight — cherry-picking ≠ proof:

12 paper bets doesn't prove anything. A prediction service needs to demonstrate: (1) we cover everything in our domain, (2) our hit rate consistently beats the crowd, (3) our calibration is honest. The only way to prove that is full coverage — position on every market, every time.

BEFORE: 2 sports (NBA + BTC) | 12 bets | cherry-picked by LLM
AFTER: 4 sports (NBA + NFL + MLB + BTC) | 415 bets | EVERY qualifying market
---
COVERAGE: 🏀 238 NBA | 🏈 33 NFL | ⚾ 20 MLB | ₿ 124 BTC
CATEGORIES: nba_game, nba_spread, nba_total, nfl_game, nfl_spread, nfl_total, mlb_game, mlb_spread, mlb_total, btc_price

Technical changes:

  • Prompt rebuilt: "You MUST take a position on EVERY market. No skipping." vs old "pick UP TO 5 with edge"
  • LLM batching: 15 markets per prompt × 27 batches = 391 predictions in one scan
  • Added NFL + MLB team classifiers (34 NFL teams + 32 MLB teams + keywords)
  • Added NHL exclusion set — Blackhawks vs Predators was leaking into NBA classifier
  • Dedup switched from event_slug → condition_id (individual market level). One event can have 10+ sub-markets.
  • API optimized: tag-scoped parallel fetches (4 threads) + server-side liquidity filter
  • Discord reworked: 1 scan-complete message + end-of-day results summary. No more 18-message floods.

Dashboard rebuilt as live scoreboard:

  • Only shows TODAY's picks (open + resolved today) — no full history
  • All-time stats always visible: hit rate, P&L, calibration, per-sport breakdown
  • Historical data stays in backend ledger — that's the future paid product
  • Mobile-first card layout, scrollable filter tabs, calibration chart
Lesson: A prediction service that cherry-picks easy calls is just marketing. Full coverage means you can't hide the losses. That transparency IS the product — it proves the alpha is real.
Lesson: Dedup by event_slug misses sub-markets. NBA Eastern Conference Champion has 10+ teams as separate markets under one slug. Always dedup at the individual market level (condition_id).
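The calibration chart mentioned above boils down to one question: within each confidence bucket, does the realized hit rate match the stated confidence? A sketch (record fields are assumptions):

```python
from collections import defaultdict

def calibration_buckets(resolved, width=0.10):
    """Group resolved bets into confidence buckets and report the realized
    hit rate per bucket. Honest calibration: a 0.7 bucket should win ~70%."""
    buckets = defaultdict(list)
    for p in resolved:
        lo = int(p["confidence"] // width) * width  # bucket lower bound
        buckets[round(lo, 2)].append(p["won"])
    return {b: sum(wins) / len(wins) for b, wins in sorted(buckets.items())}
```

If the 0.6 bucket hits 90% the engine is underconfident; if it hits 40% the stated confidence is noise — either gap is a signal the scoring prompt needs adjusting.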

Business model clarified:

  • Free tier: today's picks + aggregate all-time stats (the proof)
  • Paid tier: full searchable history, alerts before games, API access
  • The data is the gold. The public scoreboard is the sales pitch.

Next: Wait for tomorrow's resolutions. First real hit rate data across all 4 sports. Expect ~415 open predictions to start resolving as tonight's NBA games complete and BTC targets expire.

Peter Saddington Network: StaaS Fund · Dogelord · CRS · Bitcoin Racing · Saddington Racing · Karting Near Me · Racing Near Me · Sim Racing Near Me · RaceGearLab · NABME · Cars & Capital