R-Link Studio Rebuild: Handoff
This document hands the rebuild back to the internal R-Link dev team. It is delta-focused: every section leads with what is new vs the previous infrastructure you already know. It is reference material, not a tour. Section 8 is the most important: it captures four areas where a heads-up will save you time.
TL;DR · The New Stack
- Monorepo: client/ + server/. Was 3 separate repos in the R-Link-LLC org.
- Deploys: Railway auto-deploys on server/** changes on main. Don't run railway up manually.
- Secrets: Doppler project r-link-studio-redux, config dev. No .env files anywhere.
- External API: /external/v1/* for rooms + users, gated by programmatic API keys.
- Tenant scoping: SHARED_ACCOUNT_MODELS for team data. Cutover 2026-04-24, see §8.4.
Repo Layout
Three repos in the R-Link-LLC org are consolidated into a single monorepo here.
| Previously | Now |
|---|---|
| R-Link-LLC/r-link-studioa-web (frontend) | mrchevyceleb/r-link-studio-rebuild/client/ |
| R-Link-LLC/rlink-backend + r-link-studio-api | mrchevyceleb/r-link-studio-rebuild/server/ |
r-link-studio-rebuild/
client/ # React 18 + Vite 6, Kim's Base44 UI
server/ # Express 4 + MongoDB + Socket.IO
docs/ # kim-parity-checklist, gemini-api-ref
.github/workflows/ # Vercel deploy only (server is wired separately to Railway)
package.json # root, runs both packages via concurrently
HANDOFF.md # the canonical handoff
CLAUDE.md # agent-facing working notes
- No yarn workspaces. Each package owns its own node_modules and dev script. The root package.json only orchestrates.
- client/README.md and server/README.md are starter-template boilerplate from before the merge. The handoff doc here is the source of truth — replace those READMEs with a one-liner pointing here whenever convenient.
Local Dev Setup
Prereqs: Node 20, Doppler CLI logged in, MongoDB connection string (DigitalOcean managed cluster, lives in Doppler).
npm install
cd client && npm install && cd ..
cd server && npm install && cd ..
npm run dev
npm run dev uses concurrently to start both packages in parallel. Each package's own dev script wraps the command in doppler run, so Doppler injects every secret at process start.
Ports
- Client: Vite at http://localhost:5173
- Server: whatever Doppler's PORT is (currently 3051)
There are no .env files anywhere. Don't create them. Don't commit them. Doppler is the single source of truth.
Doppler & Secrets NEW
This is new for you. Read it once, then it is invisible.
What Doppler does
At process start, doppler run fetches the active config's secrets from Doppler's API and injects them into the child process as env vars. The child sees process.env.GOOGLE_API_KEY exactly as if it were in a .env file, but no file ever existed on disk.
Project + configs
- Project: r-link-studio-redux
- Config: dev (default for local development)
- Staging/prod overrides: set directly in the Railway and Vercel dashboards (e.g. PORT, BASE_URL, API_URL, USER_PORTAL_URL, NODE_ENV)
Where it is wired
// client/package.json line 7
"dev": "doppler run --project r-link-studio-redux --config dev -- vite"
// server/package.json line 9
"dev": "doppler run --project r-link-studio-redux --config dev -- nodemon server.js"
- Vercel: pulls secrets via the Doppler integration during build
- Railway: pulls Doppler dev config, with the per-environment overrides set in Railway
To get access: ask Matt to add your Doppler email to the workspace.
Reload behavior
Most code reads env vars at module load time, which means Doppler edits require a process restart. server/helpers/gemini.js is the exception: it reads GOOGLE_API_KEY at call time, so a Doppler reload is enough.
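The difference between the two read styles can be sketched in a few lines (variable names here are illustrative, not from the codebase):

```javascript
process.env.GOOGLE_API_KEY = 'original-secret';

// Module-load read: the value is captured once, when the file is required.
// A Doppler change is invisible until the process restarts.
const frozenKey = process.env.GOOGLE_API_KEY;

// Call-time read (the server/helpers/gemini.js pattern): every call
// re-reads process.env, so a Doppler reload is enough to pick up a rotation.
const currentKey = () => process.env.GOOGLE_API_KEY;

process.env.GOOGLE_API_KEY = 'rotated-secret'; // simulate a rotation

console.log(frozenKey);    // 'original-secret', stale until restart
console.log(currentKey()); // 'rotated-secret'
```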
Naming rule
Carryover from Matt's other projects: never use the literal word supabase in any secret name; it blocks Vercel's Doppler sync. Use SB_ as a prefix. Low risk in this repo (no Supabase usage), but worth knowing.
Deployment
Frontend (Vercel)
You know Vercel. Same flow:
- GitHub Actions workflow at .github/workflows/deploy.yml
- Triggers on PR (preview URL) and push to main (prod)
- Node 20, builds client/, output is client/dist
- Required GitHub Secrets: VERCEL_TOKEN, VERCEL_ORG_ID, VERCEL_PROJECT_ID
- vercel.json at the repo root sets the framework to Vite and adds an SPA rewrite so React Router routes resolve on hard refresh
Staging URLs
Backend (Railway, NEW behavior)
This is the single biggest operational change to internalize.
Railway auto-deploys the server on every push to main that touches server/**. The repo is wired to Railway with:
- Root directory: /server
- Watch paths: /server/**
- Build: server/Dockerfile
- Build/healthcheck config: server/railway.toml
Client-only pushes do NOT trigger a server rebuild. That is deliberate.
⚠ Do NOT run railway up manually
If you do, Railway queues a second build on top of the auto-deploy and you get container cycle storms (Starting/Stopping loops in the logs) that can interrupt live recordings mid-session. Matt got bitten by this. Just git push and let Railway pick it up. Verify in the Railway dashboard, Deployments tab.
Fallback only if auto-deploy is actually broken: copy server/ to a temp dir, railway link, railway up --detach from there.
Staging API: r-link-studio-api-staging-production.up.railway.app
Architecture & Conventions
Two patterns drive most of the new code. Internalize these and the rest of the rebuild reads naturally.
client/src/api/apiClient.js · the Base44 compatibility layer
Kim's UI was built against Base44's SDK (@base44/sdk). Rather than rewrite 530+ files, the rebuild provides apiClient.js: a drop-in replacement that exposes the same surface and translates calls into REST hits against our Express server.
Components import base44 and use:
- base44.entities.X.list(sortField, limit, offset) returns an array
- base44.entities.X.filter(conditions, sortField, limit, offset)
- base44.entities.X.get(id) / .create(data) / .update(id, data) / .delete(id)
- base44.entities.X.subscribe() is a no-op (REST, no realtime here)
- base44.auth.me() / updateMe() / getDevicePreferences() / updateDevicePreferences() / logout()
- base44.functions.invoke(name, params) and the shorthand base44.functions.SomeFunction(params) route to /functions/{name}
- base44.integrations.Core.InvokeLLM / SendEmail / SendSMS / UploadFile / GenerateImage route to /integrations/*
Field aliasing is automatic, both directions
- id ↔ _id
- createdAt ↔ created_at ↔ created_date
- updatedAt ↔ updated_at ↔ updated_date
- name ↔ full_name
You can write new code in either casing and it will work.
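A one-directional sketch of the aliasing idea (the real bidirectional mapping lives in client/src/api/apiClient.js; the table and function name here are invented for illustration):

```javascript
// Hypothetical snake_case/Base44-style -> camelCase mapping.
const SNAKE_TO_CAMEL = {
  _id: 'id',
  created_at: 'createdAt',
  created_date: 'createdAt',
  updated_at: 'updatedAt',
  updated_date: 'updatedAt',
  full_name: 'name',
};

// Rename aliased keys, pass everything else through untouched.
function toClientShape(doc) {
  const out = {};
  for (const [key, value] of Object.entries(doc)) {
    out[SNAKE_TO_CAMEL[key] ?? key] = value;
  }
  return out;
}

console.log(toClientShape({ _id: 'abc123', created_date: '2026-01-01', full_name: 'Kim' }));
// { id: 'abc123', createdAt: '2026-01-01', name: 'Kim' }
```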
Special-case proxies override the generic flow for User, LeadSubmission, and AuditLog, routing them to dedicated admin/leads endpoints instead of the generic entity router.
server/routes/entityRoutes.js · generic CRUD with tenant scoping
One file backs base44.entities.X.* for ~100 Mongoose models. There are four lists at the top that govern behavior:
| List | Members | Effect |
|---|---|---|
| BLOCKED_MODELS | User, Settings, AuditLog, EmailLog, Meeting | Generic router refuses these. Use dedicated admin routes. |
| PUBLIC_READ_MODELS | Event, SharedClip, SharedPresentation, WebinarRegistration, EventShareLink, Room | GET requires no auth. Writes still require auth. |
| SHARED_ACCOUNT_MODELS | Element, ElementFolder, EventFolder | Account-scoped. Visible to all users on the same account_id. Stamped on create. |
| Default | Everything else with a userId field | User-scoped. Filter by userId: req.user.id on list and get. |
Protected fields are silently rejected on write: _id, __v, createdAt, updatedAt, password, isAdmin, role, is_global_default, user_id.
NoSQL injection prevention: filter keys/values starting with $ are rejected. Aliases resolved against schema. ObjectIds cast.
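The $-rejection rule can be illustrated with a recursive check like this (the real implementation lives in server/routes/entityRoutes.js and may differ in detail; this function name is invented):

```javascript
// True if any key, or any string value, anywhere in the filter starts with '$'.
function hasDollarInjection(value) {
  if (Array.isArray(value)) return value.some(hasDollarInjection);
  if (value !== null && typeof value === 'object') {
    return Object.entries(value).some(
      ([k, v]) => k.startsWith('$') || hasDollarInjection(v)
    );
  }
  return typeof value === 'string' && value.startsWith('$');
}

console.log(hasDollarInjection({ status: 'active' }));    // false, allowed
console.log(hasDollarInjection({ $where: 'true' }));      // true, rejected
console.log(hasDollarInjection({ role: { $ne: null } })); // true, rejected
```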
Pagination: limit defaults to 100, capped at 500. Offset non-negative.
Element special case: Element data is stored as a JSON string in settings. prepareElementBody serializes on write, serializeElementDoc parses on read. Lets partial updates work without overwriting other element data.
To add a new entity: define the Mongoose model, give it a userId field (and account_id if shared), generic router handles CRUD. No route file needed unless you want custom behavior.
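The per-model scoping decision described above reduces to something like this sketch (the real logic, and the authoritative lists, live in server/routes/entityRoutes.js; scopeFilter is an invented name):

```javascript
// The three account-shared models from the table above.
const SHARED_ACCOUNT_MODELS = ['Element', 'ElementFolder', 'EventFolder'];

// Pick the Mongo filter the generic router would apply for a list/get.
function scopeFilter(modelName, user) {
  if (SHARED_ACCOUNT_MODELS.includes(modelName)) {
    return { account_id: user.account_id }; // visible to the whole account
  }
  return { userId: user.id }; // per-user default
}

console.log(scopeFilter('EventFolder', { id: 'u1', account_id: 'acct9' }));
// { account_id: 'acct9' }
console.log(scopeFilter('BrandKit', { id: 'u1', account_id: 'acct9' }));
// { userId: 'u1' }
```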
Other conventions
- server/routes/functionsRoutes.js replaces Base44 serverless functions. Dispatched from /functions/:functionName.
- server/routes/integrationsRoutes.js replaces Base44 integrations (LLM, email, SMS, file upload, image generation).
- server/server.js mounts every route file. 29 route files total.
- 112 Mongoose models in server/models/, organized by domain.
Big Infrastructure Changes
AI: now Gemini (was OpenAI)
Single entry point: server/helpers/gemini.js.
- callGemini(messages, options) accepts OpenAI-style messages and converts to Gemini format internally. Auto-extracts the system role into systemInstruction.
- Models: gemini-3-flash-preview (text), gemini-3.1-flash-image-preview (image), gemini-3.1-flash-live-preview (Live API)
- Env: GOOGLE_API_KEY. Read at call time so a Doppler reload picks up rotations without a server restart.
- json_mode: true sets responseMimeType: 'application/json' AND defaults thinkingBudget: 0. See Section 8.
- Safety filtering active: promptFeedback.blockReason and candidate finishReason === 'SAFETY' both throw.
- Find any leftover callOpenAI references and migrate them.
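A hypothetical sketch of the OpenAI-to-Gemini conversion callGemini performs internally. The output field names follow the Gemini REST API, but the helper's actual internals may differ, and toGeminiRequest is an invented name:

```javascript
// OpenAI-style [{ role, content }] in; Gemini contents + systemInstruction out.
function toGeminiRequest(messages) {
  const system = messages
    .filter((m) => m.role === 'system')
    .map((m) => m.content)
    .join('\n');
  const contents = messages
    .filter((m) => m.role !== 'system')
    .map((m) => ({
      role: m.role === 'assistant' ? 'model' : 'user', // Gemini has no 'assistant'
      parts: [{ text: m.content }],
    }));
  return system
    ? { systemInstruction: { parts: [{ text: system }] }, contents }
    : { contents };
}

const req = toGeminiRequest([
  { role: 'system', content: 'You are terse.' },
  { role: 'user', content: 'Hello' },
]);
console.log(req.systemInstruction.parts[0].text); // 'You are terse.'
console.log(req.contents[0].role);                // 'user'
```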
Captions / Transcripts: Deepgram (NEW capability)
Live captions (server/routes/deepgramRoutes.js)
- Client posts to /api/deepgram/ephemeral-token, gets a 10-minute JWT
- Client connects directly to wss://api.deepgram.com/v1/listen
- Token mint via Deepgram /auth/grant
Post-meeting (server/services/deepgramTranscribe.js)
- transcribeRecordingUrl(url) and transcribeRecordingToS3(...)
- Model: nova-3, with smart_format, punctuate, diarize, utterances, language: 'multi'
- JSONL output to S3 at transcriptions/{meetingId}/{sessionId}/transcript.jsonl. One JSON object per utterance.
Env: DEEPGRAM_API_KEY
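Consuming the one-JSON-per-utterance JSONL output is a straight split-and-parse. The field names in this sample are assumptions for illustration, not the verified schema:

```javascript
// Two sample utterance lines, as they would sit in transcript.jsonl.
const jsonl = [
  '{"start":0.0,"end":2.4,"speaker":0,"transcript":"Welcome back."}',
  '{"start":2.5,"end":4.1,"speaker":1,"transcript":"Thanks."}',
].join('\n');

// One JSON object per line; skip blank lines.
const utterances = jsonl
  .split('\n')
  .filter(Boolean)
  .map((line) => JSON.parse(line));

console.log(utterances.length);     // 2
console.log(utterances[1].speaker); // 1
```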
Rate Limiting: Upstash (NEW)
server/middleware/rateLimiter.js uses @upstash/ratelimit + @upstash/redis.
Env: UPSTASH_REDIS_REST_URL, UPSTASH_REDIS_REST_TOKEN
| Limiter | Allowance | Key |
|---|---|---|
| loginRL | 10 / 15min | rl:login:v3 |
| recoveryRL | 5 / 15min | rl:recovery:v2 |
| registerRL | 5 / 1hr | rl:register |
| apiRL (global) | 3000 / 15min | rl:api:v2 |
| uploadRL | 30 / 15min | rl:upload |
| leadRL | 20 / 15min | rl:lead |
| aiRL | 60 / 15min | rl:ai |
| chatUploadRL | 10 / 15min | rl:chat-upload |
Exempt paths (skip the global limit entirely)
- /api/health
- /api/transcription-webhook
- meeting call handlers
- /api/agora/participant-info
- /api/auth/profile
Failure modes
- HTTP 429 with Retry-After when a limit is exceeded
- If Upstash creds are missing or the service is down, falls back to an in-memory Map-based limiter (logs a warning). Production should always have Upstash configured.
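A minimal fixed-window sketch of that in-memory Map-based fallback (the real middleware is server/middleware/rateLimiter.js; makeLimiter and its return shape are invented for illustration):

```javascript
// Fixed-window limiter: at most `max` hits per `windowMs` per key.
function makeLimiter(max, windowMs) {
  const hits = new Map(); // key -> { count, resetAt }
  return (key, now = Date.now()) => {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      hits.set(key, { count: 1, resetAt: now + windowMs }); // new window
      return { allowed: true };
    }
    entry.count += 1;
    return { allowed: entry.count <= max };
  };
}

const loginLimit = makeLimiter(10, 15 * 60 * 1000); // mirrors loginRL: 10 / 15min
for (let i = 0; i < 10; i += 1) loginLimit('rl:login:v3:1.2.3.4');
console.log(loginLimit('rl:login:v3:1.2.3.4').allowed); // false, the 11th attempt
```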
Security Posture
These items make up the security baseline of the rebuild. They are load-bearing; please don't revert them without a clear reason.
Auth (server/controllers/authController.js)
- JWT_SECRET required at boot. Token generation throws if missing.
- JWT expiry: 30 days
- Min password length: 10 characters
- bcryptjs hashing (cost 10) for passwords AND password reset codes
- Login, recovery, register endpoints individually rate-limited via Upstash
Proxy (server/routes/proxyRoutes.js + server/config/trustedOrigins.js)
- Rejects URLs targeting localhost, 127.0.0.1, link-local 169.254.x.x, and RFC1918 ranges
- DNS resolves the target and re-validates the IP (prevents DNS rebinding)
- Strips inline credentials from URLs
- Rebuilds CORS headers on the response (no upstream CORS leak)
- frame-ancestors CSP built from the trustedOrigins.js whitelist
- Trusted origins: beta.r-link.com, studio.r-link.com, localhost variants. Add new domains there only.
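The private-range rejection can be sketched for IPv4 like this (the actual validation lives in server/routes/proxyRoutes.js and also covers the DNS re-check; this function name is invented):

```javascript
// True for loopback, link-local, and RFC1918 addresses.
function isForbiddenIPv4(ip) {
  const [a, b] = ip.split('.').map(Number);
  if (a === 127) return true;                       // loopback, incl. 127.0.0.1
  if (a === 169 && b === 254) return true;          // link-local 169.254.x.x
  if (a === 10) return true;                        // RFC1918 10/8
  if (a === 172 && b >= 16 && b <= 31) return true; // RFC1918 172.16/12
  if (a === 192 && b === 168) return true;          // RFC1918 192.168/16
  return false;
}

console.log(isForbiddenIPv4('10.0.0.5'));      // true
console.log(isForbiddenIPv4('169.254.1.1'));   // true
console.log(isForbiddenIPv4('93.184.216.34')); // false, a public address
```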
Uploads (server/routes/uploadRoutes.js)
- multer MIME-type filter on every endpoint
- Storage path segments sanitized to alphanumeric + dash only
- Presigned multipart S3 flow for files >100MB (bypasses Cloudflare's body limit)
- PPTX → PDF conversion via convertPresentationToPdf()
Headers / Middleware
- Helmet for security headers
- CORS locked to trusted origins only
- Cookie parser for secure session cookies
State of the World ⚠ READ
Four areas where a heads-up will save you time, plus one open work item.
8.1 Agora RTM Presence is NOT on our subscription plan
Symptom if you "fix" it wrong: infinite RTM login → subscribe-fail → disconnect → reconnect loop, error -13001 "Presence service not connected".
Reason: agora-rtm v2 SDK defaults withPresence: true on client.subscribe(). Our Agora Signaling tier does NOT include the Presence add-on. The Agora console's Presence Configuration panel shows config values (looks "on") but the right-side Subscription card has an Upgrade button. That's the tell.
Current fix (commit d9f6848): client/src/contexts/AgoraRTMContext.jsx passes explicit withPresence: false, withMetadata: false, withLock: false on subscribe. Chat works. Names/roles/join/leave come from the Socket.IO MeetingState snapshot, NOT from RTM.
Action: do NOT flip withPresence back to true unless the Agora Signaling subscription is upgraded first.
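The subscribe options from commit d9f6848, reproduced as a config sketch (passed as the options argument to the RTM client's subscribe call; only the three flags below are confirmed by the commit):

```javascript
// All presence-tier features explicitly off; chat messages still flow.
const subscribeOptions = {
  withPresence: false, // the Presence add-on is NOT on our Signaling tier
  withMetadata: false,
  withLock: false,
};

console.log(subscribeOptions.withPresence); // false
```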
8.2 Gemini thinking-token tax on JSON responses
Symptom: small max_tokens budgets (e.g. 500) on json_mode: true calls return truncated or empty JSON.
Reason: gemini-3-flash-preview is a thinking model. Thought tokens count against maxOutputTokens. Gemini can burn the whole budget on internal reasoning and return finishReason: MAX_TOKENS with a few candidate tokens. thinkingLevel: "low" does NOT meaningfully reduce this. Only thinkingConfig.thinkingBudget: 0 reliably disables it.
Current fix: server/helpers/gemini.js defaults thinkingBudget: 0 whenever json_mode: true. Don't remove that default.
If you want reasoning + JSON: pass thinking_budget: -1 (dynamic) or an explicit positive budget.
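A sketch of the generationConfig the json_mode default effectively produces. Field names follow the Gemini API; the exact shape assembled inside gemini.js may differ:

```javascript
// The json_mode: true default: JSON output, zero thought-token budget.
const generationConfig = {
  responseMimeType: 'application/json',
  thinkingConfig: { thinkingBudget: 0 },
};

// To keep reasoning AND JSON, override with a dynamic or explicit budget.
const withReasoning = {
  responseMimeType: 'application/json',
  thinkingConfig: { thinkingBudget: -1 }, // -1 = dynamic
};

console.log(generationConfig.thinkingConfig.thinkingBudget); // 0
console.log(withReasoning.thinkingConfig.thinkingBudget);    // -1
```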
8.3 Gemini Live constrained endpoint requires AUDIO modality
Symptom: Gemini Live session closes immediately with WebSocket code 1011 "Internal error" on setup.
Reason: the Live constrained endpoint (required when using ephemeral tokens on v1alpha) rejects responseModalities: ['TEXT'] on gemini-3.1-flash-live-preview. Only ['AUDIO'] is accepted, even for caption-only flows.
Why this is non-obvious: inputAudioTranscription: {} is independent of output modality, so transcription events still fire. You don't need TEXT mode to get text out.
Action: keep responseModalities: ['AUDIO'] in the Live setup payload. Verified empirically April 2026.
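A sketch of the Live setup message shape this implies (v1alpha constrained endpoint; everything beyond the two commented fields is illustrative, not verified wire format):

```javascript
const liveSetup = {
  model: 'models/gemini-3.1-flash-live-preview',
  generationConfig: {
    responseModalities: ['AUDIO'], // TEXT is rejected here: WS closes with 1011
  },
  inputAudioTranscription: {}, // transcription events fire regardless of modality
};

console.log(liveSetup.generationConfig.responseModalities[0]); // 'AUDIO'
```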
8.4 Entity router tenant scoping cutover (2026-04-24)
What changed: the rebuild updated entityRoutes.js to scope every entity request by userId and (where relevant) account_id. Previously the generic entity router only enforced this on BrandKit. The new model lines up cleanly with multi-account usage and prevents cross-account data bleed in shared studio sessions.
After: all userId-owned models scoped per-user. SHARED_ACCOUNT_MODELS scoped per-account.
Side effect: pre-2026-04-24 records have no userId/account_id and are now invisible. Users may need to recreate EventFolders and shared workspace docs. Treat any "where did my old folders go?" reports as expected fallout.
Schema rule: when adding a new userId-owned model that should be team-shared, add it to SHARED_ACCOUNT_MODELS AND add account_id to the schema. Router stamps it on create automatically.
8.5 Kim Parity Checklist still in progress
docs/kim-parity-checklist.md is the open parity tracker. Each feature gets one of: Working, Working with limitation, Broken, Misleading UI. Most items are unchecked.
This is the primary "is the rebuild ready to take over production?" gate. Worth driving to completion. Priority 1 covers the core admin and studio loop (Rooms, Schedule, Elements, Brand Kits, Team, Studio Launch). Start there.
Production Cutover
The rebuild currently runs on staging only. Current production is untouched.
| | Current Production | Rebuild |
|---|---|---|
| Frontend | r-link-studioa-web (R-Link-LLC org) | client/ in this monorepo, deployed via Vercel |
| Backend | r-link-studio-api (R-Link-LLC org) at studioapi.r-link.com, behind Cloudflare | server/ in this monorepo, deployed via Railway |
| Other backend | rlink-backend (R-Link-LLC org) | absorbed into server/ |
The mrchevyceleb GitHub account has read-only access to the R-Link-LLC org repos. You likely already have full access.
The cutover plan is your call. Matt is intentionally not making that decision in this handoff. Recommended gating: parity checklist green on Priority 1 + 2 from docs/kim-parity-checklist.md. After that: DNS prep, any data migration, flip, monitor, rollback.
Reference Material
- CLAUDE.md (repo root) — agent-facing context, doubles as architecture reference. Mostly aligned with this doc; if anything diverges, this doc wins.
- docs/kim-parity-checklist.md — open parity tracker.
- docs/gemini-api-ref.md — Gemini API reference notes.
- client/README.md and server/README.md — starter-template boilerplate from before the merge. Replace with a one-liner pointing here whenever convenient.