NYPD Times Square precinct at night, showing a patrol car and the New York Police Dept signage
Product Case Study

Redesigning Crime Reporting with NYPD

Cornell Tech × Parsons School of Design Aug – Dec 2022 Product Management
12 User Interviews · 3 Experiments Run · 75% Top Satisfaction · 7 Competitors Mapped
HOW MIGHT WE

Improve crime victims' access to services, programs, and reports?

Problem

For crime victims in New York City, the reporting experience was a second failure. A 6-hour wait after calling 911. An online portal that created more friction than it solved. And after all that effort, silence.

The gap between what users needed and what the system delivered wasn't just a UX problem. It was a systemic breakdown across communication, response time, and technology that affected victims, officers, and public confidence in the process.

No communication

Victims received zero updates after filing. The feedback loop between police and victims was completely broken.

Unacceptable wait times

Hours to weeks at every stage, from the initial 911 call to in-person interviews to case resolution.

A technology gap

Officers and victims both lacked the tools to track progress, recover property, or move cases forward efficiently.

My Role

My role and contributions.

I led UX research and Figma prototyping, designed the chatbot's full interaction flow, ran Experiments 2 and 3, and presented findings directly to NYPD stakeholders. I also contributed to the competitive analysis of 7 players and to the value chain analysis for all three solution concepts.

Team
  • 5 members: Operations Research, Product Design, MBA/Product, Computer Science
  • Cross-institutional: Cornell Tech × Parsons School of Design
  • Client: NYPD, under sergeant supervision
PM / PMM Skill | Where Demonstrated | Evidence

Strategy
GTM Strategy | Audience & positioning | Defined user segments and value prop
Product Strategy | Solution ideation, VCA | 3 concepts evaluated, 1 selected
Competitive Analysis | Landscape mapping | 7 competitors across 3 tiers
Business Modeling | Value chain analysis | Demand/supply evaluation across 3 concepts

Research & Design
User Research | Interviews, HMW decomposition | 12 interviews, 137 min, 4 themes
UX Design | Chatbot interaction flow design | Mobile-first interface mapped to user pain points
Figma Prototyping | Chatbot interface build | Full chatbot flow with A/B comparison
Process Mapping | Systems map, crime reporting flow | Rich picture of the reporting ecosystem

Validation & Execution
Experiment Design | Designing Experiments 2 & 3 | Defined hypotheses, controls, and test conditions
A/B Testing | Chatbot vs. phone (Exp. 2), chatbot vs. NYPD site (Exp. 3) | Measured satisfaction and task completion time
Data-Driven Validation | Experiments 2 & 3 | Quantitative validation of chatbot vs. alternatives
Stakeholder Management | NYPD collaboration | Presented findings to NYPD sergeant
Discovery

Starting with the user, not the solution.

We conducted 12 interviews (137 minutes) with crime victims, police officers, and service providers. Four themes emerged: a communication gap after filing, an unintuitive NYPD website, wait times spanning hours to weeks, and victims left without tools to track progress.

Our systems map surfaced deeper bottlenecks: portal bugs that silently let bad data through, unused data from the Citizen app, and an investigation timeline measured in months.

Rich Picture: systems map of the crime reporting ecosystem
Landscape

Mapping the competitive landscape.

I mapped 7 competitors across three tiers. Incumbents (911, NYPD portal, Safe Horizon) were slow and analog. Digital challengers like Citizen ($73M+ raised, sub-10-second premium response) proved demand for faster solutions, but none were tackling report filing itself.

The gap: no one was automating the victim intake process.

I plotted competitors against two dimensions: how tech-enabled their solution was, and how directly it served victims during the filing process.

[Positioning matrix: tech-enabled (low → high) × victim-facing (low → high)]
  • 911 System (incumbent): 6-hour wait times
  • NYPD Portal (incumbent): buggy, unintuitive
  • Safe Horizon (incumbent): advocacy, not filing
  • Private Investigators: dedicated but no scale
  • Patternizr: internal NYPD ML tool
  • Citizen (digital challenger): $73M+ raised · alerts, not filing
  • MD Ally (digital challenger): non-emergency triage
  • The gap (opportunity): automated, victim-facing report filing
Ideation

Narrowing from three concepts to one.

Each concept went through a Value Chain Analysis covering business model canvas, demand/supply evaluation, and feasibility assessment. The chatbot was selected as the only concept that addressed all three problems, was testable within the semester, and had the strongest user-need-to-feasibility alignment. The remaining concepts were documented as future phases.

Concept | What it does | Feasibility | Decision
Report Eligibility Chatbot | Automates intake and eligibility assessment | High: testable, hits all 3 problems | Selected
Speech-to-Text | Converts victim speech to text for officers | Medium: accent/language accuracy risk | Future phase
Text2Report | SMS filing with AI-formatted reports | Medium: AI false results risk | Future phase
Value Chain Analysis: Business Model Canvas for the Report Eligibility Chatbot


Demand & Supply Evaluation
Demand Side

Victims: Easier reporting, increased accessibility

Police: More data from increased volume, reduced manual labor

Public: Greater awareness of neighborhood crime

Supply Side

Futr & EVVA: Chatbot-based victim reporting exists, but not in NYC

IBM Watson / Ada: NLP chatbot infrastructure proven at scale

Gap: No automated intake for NYPD specifically

Solution

The solution: a report eligibility chatbot.

01
Crime Classification
Guides victims through selecting a category with contextual info to self-assess eligibility.
02
Guided Intake
Conversational flow collecting crime details with real-time validation and reference codes.
03
Status Communication
Clear accepted/rejected outcome with actionable next steps.
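The three steps above form a single linear conversation. A minimal sketch of that flow as a state machine, purely illustrative: the step names, eligible categories, and transitions are assumptions, not the team's implementation.

```python
from enum import Enum, auto

class Step(Enum):
    CLASSIFY = auto()  # 01: pick a crime category
    INTAKE = auto()    # 02: guided question-and-answer
    OUTCOME = auto()   # 03: accepted/rejected with next steps
    DONE = auto()

# Hypothetical set of categories the bot can accept online.
ELIGIBLE = {"petit larceny", "lost property", "criminal mischief"}

def advance(step, state, answer):
    """Advance the chatbot one turn; returns the next step."""
    if step is Step.CLASSIFY:
        state["category"] = answer.lower()
        return Step.INTAKE
    if step is Step.INTAKE:
        state["details"] = answer
        return Step.OUTCOME
    if step is Step.OUTCOME:
        state["accepted"] = state["category"] in ELIGIBLE
        return Step.DONE
    return Step.DONE

state = {}
step = Step.CLASSIFY
for answer in ["Petit Larceny", "Wallet taken from a parked car on 10/02", ""]:
    step = advance(step, state, answer)
print(state["accepted"])  # → True
```

Because each turn only moves forward through a fixed set of states, the victim always ends at a definite outcome screen rather than an open-ended form.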
Validation

Data over instinct: how we validated the chatbot.

We ran three structured experiments to validate different assumptions about the chatbot's viability.

Experiment 1 : ML Keyword Generation

The team's developers tested ML against human-annotated and statistical models for crime keyword matching. The ML model was ranked first 54% of the time and processed input in under 2 seconds versus 35+ seconds for human annotation, validating the technical approach.

Experiment 1: ML Keyword Generation Results
Experiment 2 : Chatbot vs. Phone

I designed this experiment with a Parsons teammate to compare our chatbot prototype against phone-based reporting. Chatbot satisfaction: 75% of participants rated it 5/5, versus 50% for phone. Average completion time: 185 seconds versus 191 seconds.

Experiment 2: Chatbot vs. Phone Satisfaction Results
Experiment 3 : Prototype vs. NYPD Website

I ran this experiment comparing our Figma prototype against the actual NYPD site. Chatbot users completed tasks in a tight 100–200 second cluster, while NYPD site users scattered across 250–500 seconds: faster and more consistent.
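The "faster and more consistent" claim rests on both the center and the spread of the two time distributions. A sketch of that comparison using hypothetical task times drawn from the observed ranges (not the actual trial data):

```python
from statistics import mean, stdev

# Hypothetical completion times in seconds, illustrating the observed ranges.
chatbot   = [110, 135, 150, 165, 180, 195]   # clustered in 100–200s
nypd_site = [255, 300, 360, 420, 470, 500]   # scattered across 250–500s

print(mean(chatbot) < mean(nypd_site))    # faster on average
print(stdev(chatbot) < stdev(nypd_site))  # lower spread = more consistent
```

Mean captures speed; standard deviation captures consistency, which matters for victims who need a predictable process.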

Experiment 3: Prototype vs. NYPD Website Time Distribution
Design

The prototype.

I designed the full Figma prototype with a Parsons design teammate: a mobile-first chatbot interface that mapped every screen back to a pain point from our research.

Card-based crime selection

Replaced the NYPD site's static dropdown with tappable cards and expandable context, so users could self-assess eligibility without leaving the flow.

Inline validation

Built real-time checks for date and address directly into the conversation, catching errors before submission instead of failing silently like the NYPD site.
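As an illustration of that inline-validation pattern, here is a minimal sketch of the date check. The input format and re-prompt messages are assumptions for illustration, not the shipped prototype logic.

```python
from datetime import date, datetime

def validate_incident_date(text):
    """Check a date answer before the report is submitted.

    Returns (ok, message) so the bot can re-prompt inline
    instead of failing silently after submission.
    """
    try:
        d = datetime.strptime(text.strip(), "%m/%d/%Y").date()
    except ValueError:
        return False, "Please enter the date as MM/DD/YYYY."
    if d > date.today():
        return False, "The incident date can't be in the future."
    return True, "Got it."

print(validate_incident_date("13/40/2022"))  # rejected: bad format
print(validate_incident_date("10/02/2022"))  # accepted
```

Surfacing the error message in the same conversational turn is what keeps bad data out of the report without breaking the flow.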

Clear outcome states

Designed distinct accepted/rejected screens with actionable next steps, not the ambiguous email NYPD sends days later.

Crime type selection: NYPD site vs. chatbot

Static dropdown with no context vs. tappable cards with expandable definitions

Outcome response: NYPD site vs. chatbot

A vague email days later vs. an immediate outcome with actionable next steps

The Product
01 Welcome
02 Crime Selection
03 Outcome
See It in Action
Chatbot prototype demo

These are select screens from the prototype. Explore the complete interaction flow on Figma →

Metrics

Measuring what matters.

Our North Star metric was report completion rate: the percentage of victims who successfully finish a report versus abandoning mid-flow. One number that captures both accessibility and usability, and the clearest signal of whether the chatbot was actually solving the problem.
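The metric itself is simple to compute from session logs. A sketch, assuming a hypothetical event schema with one record per started report:

```python
def report_completion_rate(sessions):
    """North Star metric: share of started reports that reach the outcome screen."""
    started = len(sessions)
    if started == 0:
        return 0.0
    finished = sum(1 for s in sessions if s["completed"])
    return finished / started

sessions = [
    {"completed": True},
    {"completed": True},
    {"completed": False},  # abandoned mid-flow
    {"completed": True},
]
print(report_completion_rate(sessions))  # → 0.75
```

The denominator counts every started session, so abandonment anywhere in the flow, not just at submission, drags the number down.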

While we recognized that NYPD is constrained on resources and time, we recommended adopting the chatbot as a long-term investment: the tool would save both victims and officers significant time while improving the overall crime reporting experience in New York City.