CS 499 / Computer Science Capstone / Apr 2026

A production system, enhanced three times.

This ePortfolio follows one live artifact — the Fiyope feed microservice, running at fiyope.com — through three compounding enhancements across software design, algorithms and data structures, and databases. One system, three deliberate passes, and a clear record of why each decision was made.

Author Faruk Aydin
Artifact Fiyope · services/feed
Stack TypeScript · Node · PostgreSQL · Redis · NATS
Repository github.com/ofarukaydin/improved-feed-cs499
I · Professional Self-Assessment

A short introduction before the evidence.

Who I am, what I lead with as an engineer, and how the three enhancement artifacts that follow fit together as a coherent demonstration of the program's five course outcomes.

Read the full self-assessment ↗

I am Faruk Aydin, a software engineer completing a Bachelor of Science in Computer Science at Southern New Hampshire University. I spent the last several years at startups shipping production software, and this degree is the formal grounding behind that work — algorithms, data structures, disciplined software design, secure coding, and the practice of communicating technical decisions for review. This capstone and ePortfolio are where those two halves meet: production experience sharpened by academic rigor.

I chose to build the entire ePortfolio around Fiyope, a live social platform I designed and deployed at fiyope.com. The artifact across all three enhancement categories is the same piece of real software — the Fiyope feed microservice — which I enhanced in three distinct, compounding passes. Using a production artifact meant every decision had to be defensible under real constraints: a real message bus delivering events, a real Redis instance backing sorted-set pagination, a real PostgreSQL schema already migrated multiple times. The enhancements had to be correct, not just submittable.

The code-review conversation — not the code itself — is the deliverable.

Collaborating in a Team Environment

Collaboration has been the central engineering discipline at every startup I have worked at — not a procedural step, but the place where design actually gets decided. Code review in particular is where reasoning gets tested against reviewers who come at the problem from different angles: backend engineers worried about downstream load, infrastructure operators worried about failure modes, frontend consumers worried about contract stability. That pressure shapes how I write code. Interfaces become public contracts. Commits become arguments. Comments exist for the next person rather than for me.

The enhancements in this portfolio carry that posture forward. When I added transaction boundaries in Enhancement Three, I did not just wrap the handlers — I wrote comments distinguishing single-statement atomic operations from multi-step handlers, so the next engineer would not have to reconstruct the reasoning. The code-review video for Milestone One is the same idea in a different medium: a walkthrough built so another engineer can watch once, see what the feed service does and where it is weak, and act on the plan without a second meeting.

Communicating with Stakeholders

Not every audience is another engineer. Part of the reason I started Fiyope was to live with the full range of stakeholders a real product has: users, investors, operators, and myself at three in the morning debugging an incident. For technical audiences I lean on precise structural communication — ASCII architecture diagrams, complexity tables that name the operation and its Big-O, index inventories that pair every index with the query it accelerates. For non-technical audiences I lean on the why — what user-visible problem this solves, what the failure mode looks like when it goes wrong, what the trade-off costs. The algorithms narrative explains the cursor redesign both as "a two-command race condition between ZREVRANK and ZREVRANGEBYSCORE" and as "the bug where users saw the same post twice or skipped the first post of the next page."

Data Structures and Algorithms

Enhancement Two is the explicit DSA showcase, but algorithmic reasoning threads through every artifact. The program — CS 260 and CS 340 in particular — gave me the vocabulary to name the patterns I had been using intuitively, and to recognize the ones I was using poorly.

  • Set-based relevance scoring. Replaced Array.filter + Array.includes (O(n·m)) with a Set.has lookup (O(n+m)) — a three-line change whose impact compounds across every audience member during fan-out.
  • Score-based cursor pagination. Collapsed a non-atomic two-command sequence into a single atomic ZREVRANGEBYSCORE with an exclusive score bound, eliminating a concurrency bug that caused duplicate or skipped items between pages.
  • Bounded fan-out. Replaced unbounded pipeline amplification with a deferred DeferredFanOutRequestedEvent processed by a background worker — converting worst-case O(A·(t+i)) on the feed service to O(1) while guaranteeing eventual delivery.
  • GIN indexes for array overlap. Added GIN indexes on userEntity.interests and actionEntity.topics to support the && operator, replacing sequential scans with posting-list intersections.

Software Engineering and Databases

Enhancement One (software design) and Enhancement Three (databases) are where I demonstrate the structural thinking I believe separates an engineer from a coder. Enhancement One decomposed a 257-line monolithic FeedManagerService into four services with clear single responsibilities, a domain-specific error hierarchy, structured logging, and a 26-test vitest suite across four files. Enhancement Three moved down a layer to the PostgreSQL schema: GIN and composite B-tree indexes tied to specific queries, NOT NULL and CHECK constraints that encode the application's enums at the storage layer, a UNIQUE constraint on (followerEntity.src, followerEntity.dst) to defend against at-least-once event redelivery, reversible down-migrations for every migration file, and PostgreSQL transactions around multi-step event handlers using AsyncLocalStorage-based context propagation. I also found and fixed a subtle semantic bug in the getUserFeed subquery where the "followed-by-user" filter was effectively a no-op because the subquery was not scoped to the current user — a good reminder that performance work and correctness work are inseparable.

Security

Security was not an add-on — it was woven through each enhancement as a defense-in-depth posture rather than a checklist. CS 405 gave me the vocabulary to articulate this formally. Specific improvements: removal of credential-leaking debug logs that printed environment variables; typed validation at every external boundary (InvalidCursorError for malformed cursors, guarded new Ulid() on event-bus payloads, per-command pipeline.exec() result checking); CHECK and UNIQUE constraints as storage-layer defense against bad application state; a data-access-control bug fix in getUserFeed; and the fan-out threshold doubling as a denial-of-service amplification mitigation. None of these were retrofitted after the fact — they emerged from reading the code with the adversarial question in mind: what is the worst thing that could happen here, and what is the cheapest way to make it impossible?

Career Direction

My goal leaving this program is to use its foundation to start my own company. I spent most of my career so far building for other people's product visions, and the enhancements in this ePortfolio represent the kind of engineering I want to do next: deep, deliberate, and owned end-to-end. What follows — the code review, the three enhancement narratives, and the enhanced artifacts themselves — is the best evidence I can offer that I am ready to do it.

II · Artifact Overview

One platform, three proof points.

Fiyope's feed microservice is the single artifact, enhanced three times. Each enhancement exercises all five course outcomes — not just the one it is nominally about.

01 · Milestone 2 SWE

Feed service architecture

Decomposed a 257-line monolithic service into AudienceService, ScoreService, FeedStoreService, and a thin orchestrator. Typed error hierarchy, structured logging, 26-test vitest suite.

4 services 26 tests 5 error types
02 · Milestone 3 DSA

Ranking, pagination, fan-out

Set-based O(n+m) scoring, score-based cursor pagination with atomic ZREVRANGEBYSCORE, bounded fan-out via NATS delegation, interest deduplication, Big-O analysis for every workflow.

O(n·m)→O(n+m) 2→1 cmd O(1) celeb
03 · Milestone 4 DB

PostgreSQL data layer

GIN + composite B-tree indexes, NOT NULL/CHECK/UNIQUE constraints, AsyncLocalStorage transactions, reversible down migrations, and a getUserFeed subquery correctness fix.

6 indexes 11 constraints 3 txns
// ORIGINAL

The pre-enhancement feed service, preserved on a dedicated branch so the three enhancement diffs always resolve against the true baseline.

browse original
III · Informal Code Review

Walking the codebase before enhancing it.

A recorded walkthrough of the feed microservice as it existed before enhancement — existing functionality, targeted areas for improvement (structure, efficiency, security, testing, documentation), and the planned enhancement work mapped to each of the five course outcomes.

REC CS 499 · FIYOPE · FEED SERVICE · MILESTONE ONE youtu.be/K9mQ9aZ8QL4 ↗

Prefer to read? Read the full transcript ↗

IV · Enhancement One · Software Design & Engineering

From monolith to composable services.

The original FeedManagerService worked — it was a 257-line file that handled audience resolution, scoring, Redis storage, pagination, rebuilds, and orchestration all at once. That was the problem. Enhancement One decomposes it along domain seams rather than technical layers.

Aligning services with domain concepts — audience resolution, scoring, feed storage — produced more natural interfaces. The orchestrator now reads almost like the Module One pseudocode.
01

Service decomposition into AudienceService, ScoreService, FeedStoreService, and a thin orchestrator — each with a single responsibility.
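
A minimal sketch of that decomposition, with illustrative interfaces (the real Fiyope signatures may differ):

```typescript
interface AudienceMember { userId: string; interests: string[] }

// Each collaborator owns exactly one domain concern.
interface AudienceService {
  resolve(authorId: string, topics: string[]): Promise<AudienceMember[]>;
}
interface ScoreService {
  relevance(interests: string[], topics: string[]): number;
}
interface FeedStoreService {
  add(userId: string, itemId: string, score: number): Promise<void>;
}

// The orchestrator only sequences the domain steps; it holds no logic of its own.
class FeedOrchestrator {
  constructor(
    private readonly audience: AudienceService,
    private readonly scores: ScoreService,
    private readonly store: FeedStoreService,
  ) {}

  async fanOut(authorId: string, itemId: string, topics: string[]): Promise<void> {
    const members = await this.audience.resolve(authorId, topics);
    for (const member of members) {
      const score = this.scores.relevance(member.interests, topics);
      await this.store.add(member.userId, itemId, score);
    }
  }
}
```

With stub implementations injected for each interface, the orchestration path can be exercised in isolation, which is what makes a focused unit-test suite practical.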

02

Hotness scoring wired in. Implemented the previously commented-out Hacker-News-style gravity decay (log1p(relevance) / (ageHours + 2)^1.8) and blended it with topic overlap (0.7·rel + 0.3·hot). Timestamp extracted from the ULID — no schema change.
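
A sketch of the two formulas above, assuming a standard 26-character ULID whose first 10 Crockford base32 characters encode a 48-bit millisecond timestamp; function names are illustrative:

```typescript
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// The first 10 ULID characters encode the creation time in milliseconds,
// so item age is recoverable with no schema change.
function ulidTimestamp(ulid: string): number {
  let ms = 0;
  for (const ch of ulid.slice(0, 10).toUpperCase()) {
    const v = CROCKFORD.indexOf(ch);
    if (v < 0) throw new Error(`invalid ULID character: ${ch}`);
    ms = ms * 32 + v; // 48-bit value, safely below Number.MAX_SAFE_INTEGER
  }
  return ms;
}

// Hacker-News-style gravity decay blended with topic relevance.
function blendedScore(relevance: number, createdAtMs: number, nowMs: number): number {
  const ageHours = Math.max(0, nowMs - createdAtMs) / 3_600_000;
  const hotness = Math.log1p(relevance) / Math.pow(ageHours + 2, 1.8);
  return 0.7 * relevance + 0.3 * hotness;
}
```

The +2 offset keeps brand-new items from dividing by a near-zero age, and log1p dampens the effect of very high relevance counts.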

03

Typed error hierarchy (FeedError, FanOutError, FeedRebuildError, UserNotFoundError, InvalidCursorError) carrying structured context for production debugging.

04

Error-handling hardening across every await, new Ulid(), Base64.decode(), and pipeline.exec() — malformed events and pipeline failures are caught and logged instead of silently dropped.

05

Structured logging replacing console.log/console.error with the injected Logger — including removal of debug logs that leaked environment variables.

06

26-test vitest suite covering scoring edges, audience resolution, fan-out orchestration, rebuild, pagination delegation, and error paths.

07

Architecture documentation: ASCII data-flow diagrams, service-responsibility matrix, testing approach notes.

Outcome 3 Outcome 4 Outcome 5 Outcome 2
V · Enhancement Two · Algorithms & Data Structures

Where data-structure choice becomes the system.

A social feed is a ranking and retrieval problem. Its correctness and scalability live inside the choice of Sets, Maps, sorted sets, and pipelines. This enhancement makes those choices deliberate and measurable.

Score-based cursors eliminate a two-command race condition and a Redis round-trip. The trade-off — rare ties at exact page boundaries — is an explicit, documented design choice.
01

Set-based relevance scoring. Replaced Array.filter + Array.includes (O(t·i)) with a Set.has lookup (O(t + i)). For a 5,000-user audience with 20 interests vs. 10 topics, this goes from 1,000,000 comparisons to 150,000.
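
The swap itself is small; this hedged sketch (names illustrative) shows both shapes:

```typescript
// Before: O(t·i), because every topic rescans the whole interests array.
function overlapQuadratic(topics: string[], interests: string[]): number {
  return topics.filter((t) => interests.includes(t)).length;
}

// After: O(t + i), one pass to build the Set, then O(1) membership checks.
function overlapLinear(topics: string[], interests: string[]): number {
  const interestSet = new Set(interests);
  return topics.filter((t) => interestSet.has(t)).length;
}
```

Both return the same overlap count; only the cost changes, and that cost is paid once per audience member on every fan-out.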

02

Score-based cursor pagination. Cursor encodes Base64(score:itemId). Subsequent pages use the score as an exclusive upper bound with ZREVRANGEBYSCORE — one atomic Redis command replacing the two-command sequence that produced duplicates or skips under concurrent fan-out.
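
A sketch of the cursor encoding and the exclusive bound (validation elided, helper names illustrative); the "(" prefix is Redis's standard exclusive-range syntax for ZREVRANGEBYSCORE:

```typescript
// Cursor = Base64("score:itemId"), opaque to clients.
function encodeCursor(score: number, itemId: string): string {
  return Buffer.from(`${score}:${itemId}`).toString("base64");
}

// Bounds for the next page, consumed by one atomic command:
//   ZREVRANGEBYSCORE feed:<userId> (score -inf LIMIT 0 <pageSize>
function nextPageBounds(cursor: string): { max: string; min: string } {
  const [scorePart] = Buffer.from(cursor, "base64").toString("utf8").split(":");
  return { max: `(${scorePart}`, min: "-inf" };
}
```

Because the bound is a score rather than a rank, concurrent fan-out can insert or evict items without shifting the page window, which is exactly the race the two-command version suffered from.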

03

Bounded fan-out for the celebrity problem. When audience size exceeds MAX_FANOUT_SIZE = 10,000, publish a DeferredFanOutRequestedEvent to NATS JetStream. A background worker processes the audience in controlled batches — feed-service work becomes O(1) while eventual delivery is guaranteed.
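
The threshold branch can be sketched with the NATS client stubbed behind a plain callback; the event shape is illustrative:

```typescript
const MAX_FANOUT_SIZE = 10_000;

// Stand-in for the real NATS JetStream payload.
interface DeferredFanOutRequestedEvent { actionId: string; audienceSize: number }

type Publisher = (event: DeferredFanOutRequestedEvent) => void;

// Returns what the feed service did, so the O(1) short-circuit is visible.
function fanOut(actionId: string, audience: string[], publish: Publisher): "inline" | "deferred" {
  if (audience.length > MAX_FANOUT_SIZE) {
    // O(1) on the hot path: hand the whole audience off to the background worker.
    publish({ actionId, audienceSize: audience.length });
    return "deferred";
  }
  // Small audiences are written inline (pipeline writes omitted here).
  return "inline";
}
```

The worker consumes the deferred event and performs the same writes in controlled batches, so delivery is eventual but guaranteed.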

04

Pipeline batching. PIPELINE_BATCH_SIZE = 1,000 chunks fan-out using a functional Array.from chunking pattern, bounding peak pipeline memory and letting the event loop yield between batches.
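
The chunking pattern is small enough to show whole; this is a generic sketch of the Array.from approach described above:

```typescript
// Split items into fixed-size batches; the last batch may be short.
function chunk<T>(items: T[], size: number): T[][] {
  return Array.from({ length: Math.ceil(items.length / size) }, (_, i) =>
    items.slice(i * size, (i + 1) * size),
  );
}
```

With PIPELINE_BATCH_SIZE = 1,000, a 2,500-member audience yields batches of 1,000, 1,000, and 500, bounding the size of any single Redis pipeline.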

05

Interest deduplication in audience resolution. A user matched by both UNION branches previously had duplicate interests concatenated, doubling their relevance score. new Set([...existing, ...incoming]) fixes the score-inflation bug in one line.

06

Typed cursor validation. decodeCursor() validates base64, separator, numeric score, and item ID — each failure throws InvalidCursorError instead of leaking an opaque decode failure.
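
A sketch of that decode path; the error messages and the explicit base64 check are illustrative (Node's Buffer.from silently ignores malformed base64, so the format must be validated up front):

```typescript
class InvalidCursorError extends Error {}

const BASE64_RE = /^[A-Za-z0-9+/]+={0,2}$/;

function decodeCursor(cursor: string): { score: number; itemId: string } {
  if (!BASE64_RE.test(cursor)) throw new InvalidCursorError("cursor is not valid base64");
  const decoded = Buffer.from(cursor, "base64").toString("utf8");
  const sep = decoded.indexOf(":");
  if (sep < 0) throw new InvalidCursorError("cursor missing ':' separator");
  const score = Number(decoded.slice(0, sep));
  if (!Number.isFinite(score)) throw new InvalidCursorError("cursor score is not numeric");
  const itemId = decoded.slice(sep + 1);
  if (itemId.length === 0) throw new InvalidCursorError("cursor item ID is empty");
  return { score, itemId };
}
```

Every failure mode maps to the same typed error, so callers can translate it into a single well-defined 400 response instead of leaking an opaque decode failure.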

07

36-test vitest suite (from 26), including a new 9-test FeedStoreService suite and fan-out threshold/batching verified across 2,500 users in 3 batches.

08

Big-O complexity documentation in the README for scoring (O(t + i)), fan-out (O(A · (t + i)) with O(1) celebrity short-circuit), retrieval (O(log N + P)), rebuild, and audience resolution — plus a data-structure rationale table.

Outcome 3 Outcome 4 Outcome 5 Outcome 2
VI · Enhancement Three · Databases

Let the workload drive the schema.

The PostgreSQL layer beneath Enhancements One and Two had six specific weaknesses: no indexes, no constraints, no rollback, out-of-sync TypeScript types, a semantically broken subquery, and no transaction boundaries. This enhancement resolves each — with every change tied to a specific query, not a generic checklist.

Adding NOT NULL lets the application code get simpler — the schema now guarantees what the code previously had to handle defensively.
01

GIN indexes on userEntity.interests and actionEntity.topics — inverted indexes over array elements, replacing sequential scans under the && overlap operator with posting-list intersections.

02

B-tree + composite indexes. followerEntity.src, followerEntity.dst, actionEntity.ownerId, plus a composite (visibility, type, ownerId) index covering the three-column WHERE pattern used by both feed queries.

03

UNIQUE on (src, dst) — NATS JetStream's at-least-once delivery could previously create duplicate follower rows, inflating counts and biasing fan-out. The constraint stops it at the storage layer.

04

NOT NULL with array defaults on interests and topics — kills the silent &&-returns-NULL bug class, and lets the downstream COALESCE(..., ARRAY[]::text[]) be removed.

05

CHECK constraints on visibility, type, and status, encoding the TypeScript enums at the storage layer as defense-in-depth against bad application state.

06

Type / schema alignment in db.type.ts — dropped the stale createdAt/updatedAt fields and replaced bare string types with the ActionableVisibility and UserStatus enums, so the TypeScript compiler catches invalid values.

07

Fixed getUserFeed subquery bug. The original returned "users followed by anyone," not "users followed by the current user" — effectively a no-op filter. Replaced with a direct followerEntity query scoped to src = userId.

08

Transaction boundaries around AccountCreatedHandler, ActionableCreatedHandler, and UserInterestChangedHandler using KyselyService.transaction() with AsyncLocalStorage-based context propagation. Redis-outside-PG boundary documented explicitly.
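
The propagation mechanism can be sketched with Node's built-in AsyncLocalStorage and the database stubbed out; TxContext and withTransaction are illustrative stand-ins for the real KyselyService internals:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical context: real code would hold a Kysely transaction handle.
interface TxContext { txId: number }

const txStorage = new AsyncLocalStorage<TxContext>();

// Repositories read the ambient transaction instead of threading it
// through every function signature.
function currentTx(): TxContext | undefined {
  return txStorage.getStore();
}

let nextTxId = 1;

// Everything awaited inside `fn`, however deeply nested, sees the same context.
async function withTransaction<T>(fn: () => Promise<T>): Promise<T> {
  const ctx: TxContext = { txId: nextTxId++ };
  return txStorage.run(ctx, fn);
}
```

Because the context travels with the async execution, handler code stays untouched by transaction plumbing; Redis writes simply happen outside withTransaction, which matches the Redis-outside-PG boundary documented in the narrative.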

09

Reversible down-migrations for every migration file, including the lossy topic-case normalization (documented comment).
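
A reversible migration pair can be sketched as follows; the SchemaRunner interface is a stand-in for the real migration API (Kysely migrations receive a database instance and export up/down), and the index name is hypothetical:

```typescript
// Minimal stand-in for the migration runner's database handle.
interface SchemaRunner { run(sql: string): Promise<void> }

// Every object `up` creates, `down` drops: the pair stays symmetric.
async function up(db: SchemaRunner): Promise<void> {
  await db.run(`CREATE INDEX idx_user_interests ON "userEntity" USING GIN (interests)`);
}

async function down(db: SchemaRunner): Promise<void> {
  await db.run(`DROP INDEX idx_user_interests`);
}
```

Where a migration is lossy (like the topic-case normalization), the down-migration cannot restore the original data, so the irreversibility is documented in a comment rather than hidden.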

10

Database architecture documentation in the README: ER diagram, index inventory, constraint summary, migration history, and transaction-boundary table.

Outcome 3 Outcome 4 Outcome 5 Outcome 2
VII · Course Outcome Alignment

Five outcomes, three enhancements, one artifact.

Every enhancement exercises outcomes 2–5 directly through code, schema, and documentation. Outcome 1 — collaborative environments and stakeholder-ready communication — is met by the code review video, the written narratives, and this ePortfolio itself.

Course Outcome                                                  | 01 — Software Design                    | 02 — Algorithms & DS                          | 03 — Databases
I. Build collaborative environments for diverse audiences       | Code review & README                    | Complexity tables                             | Architecture docs
II. Deliver professional-quality written / visual communication | Service-responsibility matrix           | Big-O analysis section                        | Index + constraint tables
III. Design solutions managing algorithmic trade-offs           | SRP + DI decomposition                  | Set scoring, cursor redesign, bounded fan-out | Query rewriting, composite indexes
IV. Use well-founded, innovative techniques and tools           | DI, vitest, typed errors, hotness decay | NATS deferred events, pipeline batching       | GIN indexes, AsyncLocalStorage transactions
V. Security mindset anticipating adversarial exploits           | Credential-leak fix, input validation   | Typed cursor validation, DoS threshold        | CHECK/UNIQUE, data-access-control fix