This Capstone project implements Veritas, an automated fake news analyzer designed to assess the credibility of online articles in real time. Veritas functions as a browser extension that extracts the visible text of an article, identifies key claims, and forwards those claims to an external analysis agent. The system comprises a Chrome extension for user-side interaction, a FastAPI backend for claim extraction and communication, and an ADK-based Veritas Agent that conducts evidence retrieval and produces verdicts, truth scores, bias scores, and short explanatory summaries. Development follows the Unified Software Development Process, progressing through iterative and incremental cycles that emphasize requirements analysis, architectural design, implementation, and continuous evaluation. This structured approach supports rapid validation of the minimum viable product, clear separation of responsibilities across components, and a scalable foundation for future enhancements.
Online misinformation continues to spread at a pace that makes it difficult for readers to determine whether the claims in an article are factual, misleading, or intentionally biased. Many users do not have the time, expertise, or resources to independently research each claim they encounter, and traditional fact-checking services often operate too slowly to keep up with fast-moving content. Veritas addresses this gap by providing an automated, real-time layer of analysis directly within the browser. By extracting claims from an article, evaluating them through a research-driven agent, and displaying verdicts and scores in place, Veritas helps users quickly understand the reliability of the information in front of them. The goal is to give readers immediate clarity, reduce the influence of unverified or deceptive claims, and support more informed decision making without requiring them to leave the page or perform their own research.
The purpose of the Veritas system is to provide users with an immediate, research-supported assessment of the credibility of claims within online articles.
Figure 1 shows the high-level workflow of the system. When a user clicks the Veritas icon while viewing an article, the browser extension captures the visible article text and page address and sends this data to a FastAPI service. The FastAPI service performs basic claim extraction on the article text and converts each claim into a short, structured statement. These statements are then forwarded to the Veritas Agent, which performs research, compares each claim with available evidence, and returns a verdict, truthfulness score, bias score, and brief explanation for every claim. FastAPI receives these results, normalizes them into a simple response format, and returns them to the extension. Finally, the extension annotates the original article view with visual indicators and tooltips so that the user can see credibility information in context.
```mermaid
flowchart LR
A[User clicks Veritas icon] --> B[Extension captures article text and URL]
B --> C[Extension sends JSON POST to FastAPI]
C --> D[FastAPI validates request]
D --> E[Claim extraction identifies and normalizes claims]
E --> F[FastAPI packages claims for ADK agent]
F --> G[FastAPI sends claims to Veritas Agent]
G --> H[ADK agent researches and produces verdicts and scores]
H --> I[FastAPI receives and normalizes results]
I --> J[FastAPI returns JSON results to extension]
J --> K[Extension injects badges into the article]
K --> L[User views annotated article]
```
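The request/response flow above can be sketched end to end in plain Python. Everything here is illustrative: `extract_claims` and `query_agent` are hypothetical stand-ins for the real claim extraction module and the ADK agent call, and the field names are assumptions rather than the project's actual schema.

```python
def extract_claims(article_text: str) -> list[dict]:
    # Placeholder extraction: one claim per sentence (the real module is smarter).
    sentences = [s.strip() for s in article_text.split(".") if s.strip()]
    return [{"id": f"c{i + 1}", "text": s} for i, s in enumerate(sentences)]

def query_agent(claims: list[dict]) -> list[dict]:
    # Stand-in for the Veritas Agent call; returns a fixed verdict per claim.
    return [
        {"claimId": c["id"], "verdict": "unverified", "truthScore": 50,
         "biasScore": 0, "explanation": "stub result"}
        for c in claims
    ]

def analyze_article(payload: dict) -> dict:
    """Mirrors the backend flow: validate -> extract -> query agent -> normalize."""
    if not payload.get("articleText") or not payload.get("url"):
        return {"status": "error", "detail": "articleText and url are required"}
    claims = extract_claims(payload["articleText"])
    results = query_agent(claims)
    return {"status": "ok", "url": payload["url"], "claims": claims, "results": results}
```

In the deployed system the same steps run inside a FastAPI route handler, with the agent call made over HTTP rather than an in-process stub.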
The development of Veritas follows the Unified Software Development Process (USDP), a use-case driven, iterative, and architecture-centered methodology well suited for complex systems with multiple interacting components. USDP provides a structured approach that organizes development into clearly defined phases while allowing continuous refinement as the project evolves. This framework supports the needs of Veritas, which integrates a browser extension, a backend service, and an external analysis agent, each with distinct responsibilities.
The Inception Phase establishes the overall vision of Veritas, identifies the core problem of misinformation, and defines the minimum viable product. During this phase, the project clarifies its primary use case: enabling users to receive real-time credibility evaluations on claims found in online articles.
The Elaboration Phase focuses on the architecture of the system. Here the team designs the interactions among the browser extension, FastAPI backend, and Veritas Agent. Major risks such as claim extraction accuracy, agent communication, and DOM annotation strategies are analyzed and addressed. This phase produces the foundational models, including workflow diagrams, architectural views, and class or component outlines that guide the remaining development.
The Construction Phase involves iterative implementation of Veritas in small, testable increments. Each iteration builds upon previous work, adding or refining features such as article capture, claim extraction, API communication, or result rendering. Testing is continuous throughout this phase, ensuring that components function both individually and as part of the full system flow.
The Transition Phase prepares the system for deployment and user evaluation. This includes packaging the browser extension, hosting the FastAPI service, validating communication with the Veritas Agent, and ensuring that the user interface accurately reflects the outputs of the analysis. Feedback from early testers informs final adjustments before full release.
Using USDP ensures that Veritas is developed with a strong architectural foundation, clear understanding of risks, and consistent refinement through iterative cycles. It supports the project’s technical complexity and enables the team to deliver a functional, extensible system aligned with the goals defined in the early phases.
```mermaid
flowchart LR
A[Inception Phase<br>Define vision and MVP] --> B[Elaboration Phase<br>Design architecture and address risks]
B --> C[Construction Phase<br>Iterative implementation and testing]
C --> D[Transition Phase<br>Deployment, validation, and user feedback]
```
Most people currently rely on three approaches to evaluate the credibility of online content:
- **Manual Verification:** Readers independently search multiple sources, compare information, and try to determine whether a claim is true. This process is slow, inconsistent, and depends heavily on the reader’s skill and available time.
- **External Fact-Checking Websites:** Users leave the article, search fact-checking platforms, and hope the specific claim has already been reviewed. These services cannot keep up with the volume of new content and rarely provide real-time analysis.
- **Passive Browser Notifications or Plugins:** Some tools flag websites with low reliability scores, but they evaluate at the domain level, not the individual claims inside an article. These tools do not break down truthfulness or bias on a sentence-by-sentence basis.
Veritas improves on these systems in several ways:
- **Real-Time In-Article Claim Analysis:** Instead of checking a whole website or asking users to look elsewhere, Veritas analyzes the specific claims inside the article while the user is reading it.
- **Automated Evidence Retrieval and Scoring:** The system uses a research-driven agent that compares each claim to external evidence and produces both a categorical verdict and numerical scores, giving users more detail than generic reliability labels.
- **Contextual Annotation:** Results are embedded directly into the article using badges and tooltips, allowing users to see which claims are trustworthy without leaving the page.
- **Faster and More Scalable Than Manual Fact-Checking:** The workflow reduces reliance on human reviewers and allows for immediate evaluation of new or rapidly spreading content.
This project is developed by a team of three members: Diego Martinez, Christian Cevallos, and Justin Cardenas. Diego serves as both the team lead and an active developer. Christian and Justin focus on development tasks. The team follows a Scrum-based planning structure in which work is divided into short iterations that allow continuous feedback and refinement. Diego is responsible for organizing Scrum meetings, defining iteration goals, guiding task assignments, and ensuring that progress aligns with the system vision. Each team member contributes to feature development, testing, and integration within the FastAPI backend, the browser extension, and the Veritas Agent workflow. The team conducts regular stand-ups to review accomplishments, identify blockers, and plan next steps. Sprint planning sessions determine feature priorities, sprint reviews track completed functionality, and retrospectives help the team refine its processes. This structure supports a consistent pace of delivery, clear communication, and effective risk management throughout the development cycle.
| Team Member | Role | Responsibilities |
|---|---|---|
| Diego Martinez | Team Lead / Developer | Organizes Scrum meetings. Coordinates sprint planning, reviews, and retrospectives. Manages task assignments. Oversees system architecture. Contributes to development of the browser extension, FastAPI backend, and overall integration. |
| Christian Cevallos | Developer | Implements features within assigned sprints. Works on backend logic, claim processing, and extension functionality. Participates in testing and debugging cycles. |
| Justin Cardenas | Developer | Contributes to backend and extension development. Assists with feature implementation, testing, and sprint deliverables. Supports integration and refinement tasks. |
| Methodology Element | Description |
|---|---|
| Development Model | Scrum-based iterative development with short, structured sprints. |
| Meetings | Diego leads daily stand-ups, sprint planning, sprint reviews, and retrospectives. |
| Work Allocation | Tasks are assigned per sprint based on priority and team workload. |
| Collaboration | All members participate in feature development, testing, and integration. |
| Component | Minimum Requirement | Recommended Requirement | Purpose |
|---|---|---|---|
| CPU | Dual core processor | CPU with four or more cores | Running browser, IDE, FastAPI server |
| RAM | 8 GB | 16 GB or more | Smooth multitasking during development |
| Storage | 256 GB free disk space | 512 GB or more | Source code, dependencies, logs |
| Network | Stable broadband internet connection | High speed connection | Agent calls, dependency downloads |
| Display | Single monitor with 1080p resolution | Dual monitors with 1080p or higher resolution | Viewing code, browser, and logs at once |
| Test Devices | One desktop or laptop running a supported browser | Multiple machines or virtual machines for cross testing | Verifying extension behavior |
| Category | Software / Tool | Version or Equivalent | Usage |
|---|---|---|---|
| Operating System | Windows, macOS, or Linux | Recent stable release | Development environment |
| Web Browser | Google Chrome | Current stable version | Running and testing the browser extension |
| Browser Dev Tools | Chrome Developer Tools | Built in | Inspecting DOM, debugging the extension |
| Programming Language | Python | Version 3.10 or later | FastAPI backend and integration with ADK |
| Backend Framework | FastAPI | Current stable version | REST API that connects extension and agent |
| Package Manager | pip | Current stable version | Installing Python dependencies |
| Agent Platform | Google Cloud ADK or equivalent | Current environment version | Veritas analysis agent that evaluates claims |
| Extension Stack | JavaScript, HTML, CSS | ES6 or later | Building the Chrome extension UI and logic |
| Node Ecosystem | Node.js and npm or yarn | Current LTS | Tooling, bundling, and optional build scripts |
| IDE or Editor | VS Code, IntelliJ, PyCharm, or similar | Current stable version | Writing and managing code |
| Version Control | Git | Current stable version | Source control and collaboration |
| Repository Hosting | GitHub or similar | Web account | Remote repository and issue tracking |
| API Testing Tools | curl or REST client plugin | Any modern version | Testing FastAPI endpoints manually |
| Documentation Tools | Markdown and Mermaid support | GitHub native | Project documentation and workflow diagrams |
Use Case ID: VER-MVP-001-LaunchExtension
Level: System-level end-to-end
User Story:
As a news reader, I want to launch Veritas while viewing an article so that I can request in-place credibility analysis.
Actor:
User
Pre-Conditions:
- Veritas extension is installed and enabled
- User is viewing a supported article page
- Extension has permission to run on the site
Trigger:
User clicks the Veritas extension icon.
System Behavior:
- Extension opens its interface
- Extension verifies the current page is eligible for analysis
- Analysis session is initialized
Post-Conditions:
- An analysis session has begun for the active article
Use Case ID: VER-MVP-002-ParseArticleUrl
Level: System-level data processing
Actor:
System
System Behavior:
- Accepts a user-provided article URL
- Parses the URL to extract relevant components
- Validates the URL before analysis begins
Post-Conditions:
- A valid, structured URL is available for downstream processing
Use Case ID: VER-MVP-003-CapturePublicationDate
Level: Internal system process
Actor:
Veritas Browser Extension
System Behavior:
- Searches metadata for publication timestamps
- Parses structured data timestamps
- Attempts visible date extraction
- Normalizes the date
- Attaches publication date to metadata
Post-Conditions:
- Publication date is captured or marked unavailable
Use Case ID: VER-MVP-004-FormatPublicationDate
Level: System-level data normalization
Actor:
System
System Behavior:
- Receives raw publication date
- Converts date to YYYY-MM-DD format
- Ensures consistency across UI and storage
Post-Conditions:
- Publication dates are uniformly formatted
Use Case ID: VER-MVP-005-CaptureArticleText
Level: Internal system process
Actor:
Veritas Browser Extension
System Behavior:
- Identifies article container
- Extracts visible article text
- Removes non-article elements
- Captures URL and metadata
- Prepares data for backend submission
Post-Conditions:
- Cleaned article text is ready
Use Case ID: VER-MVP-006-SendArticleDataToBackend
Level: Internal system process
Actor:
Veritas Browser Extension
System Behavior:
- Forms JSON payload
- Sends POST request to FastAPI
- Updates UI to processing state
Post-Conditions:
- Backend receives article data
Use Case ID: VER-MVP-007-ValidateArticleData
Level: Internal system process
Actor:
FastAPI Backend
System Behavior:
- Validates required fields
- Rejects malformed payloads
- Forwards valid data
Post-Conditions:
- Payload accepted or rejected
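The checks in VER-MVP-007 would normally live in a FastAPI/Pydantic request model; the plain function below mirrors them so the rules are explicit. The minimum-length threshold is an assumed tuning value, not a project requirement.

```python
MIN_ARTICLE_LENGTH = 200  # assumed minimum; the real threshold is a tuning decision

def validate_article_payload(payload: dict) -> tuple[bool, str]:
    """Return (ok, message) for an incoming article payload."""
    if not isinstance(payload, dict):
        return False, "payload must be a JSON object"
    text = payload.get("articleText")
    url = payload.get("url")
    if not isinstance(text, str) or not text.strip():
        return False, "articleText is required"
    if not isinstance(url, str) or not url.startswith(("http://", "https://")):
        return False, "url must be an http(s) URL"
    if len(text) < MIN_ARTICLE_LENGTH:
        return False, f"articleText shorter than {MIN_ARTICLE_LENGTH} characters"
    return True, "ok"
```

In the actual backend, a failed check would translate into the 4xx response shown in the validation sequence diagram.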
Use Case ID: VER-MVP-008-ExtractClaims
Level: Internal system process
Actor:
FastAPI Claim Extraction Module
System Behavior:
- Splits text into candidate sentences
- Identifies claims
- Normalizes claims
- Assigns claim identifiers
Post-Conditions:
- Structured claim list prepared
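A minimal sketch of the VER-MVP-008 pipeline, assuming a sentence-splitting heuristic: split, filter out fragments, questions, and obviously subjective sentences, then assign IDs. The opinion-marker list is purely illustrative; the production extractor would use a stronger model.

```python
import re

# Words that often signal a subjective sentence rather than a checkable claim
# (illustrative list, not the real extractor's logic).
_OPINION_MARKERS = {"should", "must", "believe", "think", "feel", "best", "worst"}

def extract_claims(text: str) -> list[dict]:
    """Split text into sentences and keep the ones that look like factual claims."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    claims = []
    for sentence in sentences:
        s = sentence.strip()
        words = s.lower().rstrip(".!?").split()
        # Skip fragments, questions, and obviously subjective sentences.
        if len(words) < 4 or s.endswith("?"):
            continue
        if _OPINION_MARKERS.intersection(words):
            continue
        claims.append({"id": f"claim-{len(claims) + 1:03d}", "text": s})
    return claims
```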
Use Case ID: VER-MVP-009-DisplayClaimCount
Level: System-level UI feedback
User Story:
As a user, I want to see how many claims were extracted so that I understand the scope of the analysis.
Actor:
Veritas Browser Extension
Pre-Conditions:
- Claim extraction has completed
Trigger:
Claims are successfully extracted
System Behavior:
- Counts total extracted claims
- Displays claim count in popup UI
Post-Conditions:
- Claim count is visible to the user
Use Case ID: VER-MVP-010-DisplayExtractedClaims
Level: System-level UI interaction
User Story:
As a user, I want to view the extracted claims so that I know what statements are being analyzed.
Actor:
Veritas Browser Extension
Pre-Conditions:
- Claims have been extracted
Trigger:
User views claim list in popup
System Behavior:
- Renders extracted claims
- Displays claims in readable order
- Supports scrolling if needed
Post-Conditions:
- User can review extracted claims
Use Case ID: VER-MVP-011-PackageClaimsForAgent
Level: Internal system process
Actor:
FastAPI Backend
System Behavior:
- Builds agent-ready payload
- Includes claim IDs and metadata
- Prepares payload for transmission
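The packaging step in VER-MVP-011 reduces to assembling a dict from the claim list and article metadata. All field names here are assumptions; the real ADK agent contract may differ.

```python
def package_claims_for_agent(claims: list[dict], article_meta: dict) -> dict:
    """Build an agent-ready payload from extracted claims and article metadata."""
    return {
        "requestId": article_meta.get("requestId", "unknown"),
        "article": {
            "url": article_meta.get("url"),
            "publicationDate": article_meta.get("publicationDate"),  # may be None
        },
        "claims": [{"claimId": c["id"], "text": c["text"]} for c in claims],
        "claimCount": len(claims),
    }
```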
Use Case ID: VER-MVP-012-SendClaimsToAgent
Level: Internal system process
Actor:
FastAPI Backend
System Behavior:
- Sends payload to ADK agent
- Handles retries and timeouts
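One way to sketch the retry-and-timeout handling in VER-MVP-012 is a small backoff wrapper around the transport call. `send_fn` stands in for the real HTTP call to the ADK agent; the retry counts and delays are illustrative defaults.

```python
import time

def send_with_retries(send_fn, payload, max_attempts=3, base_delay=0.01):
    """Call send_fn(payload), retrying transient failures with exponential backoff."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return send_fn(payload)
        except (TimeoutError, ConnectionError) as exc:
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # e.g. 0.01s, 0.02s, 0.04s
    raise RuntimeError(f"agent unreachable after {max_attempts} attempts") from last_error
```

Keeping the retry policy in one wrapper means the backend's endpoint code stays free of transport concerns.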
Use Case ID: VER-MVP-013-AgentEvidenceAnalysis
Level: Internal system process
Actor:
Veritas Analysis Agent
System Behavior:
- Evaluates claims
- Assigns verdicts and scores
- Generates explanations
Use Case ID: VER-MVP-014-ReceiveAgentResults
Level: Internal system process
Actor:
FastAPI Backend
System Behavior:
- Parses agent response
- Validates structure
- Prepares results
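The parse-validate-prepare steps of VER-MVP-014 can be sketched as indexing agent results by claim ID and dropping anything malformed. The required-field list mirrors the `ClaimResult` fields described elsewhere in this document, but the exact schema is an assumption.

```python
REQUIRED_FIELDS = ("claimId", "verdict", "truthScore", "biasScore", "explanation")

def normalize_agent_results(raw_results: list[dict], known_claim_ids: set[str]) -> dict:
    """Validate each agent result and index it by claim ID.
    Results with missing fields or unknown claim IDs are dropped and counted."""
    by_claim, dropped = {}, 0
    for item in raw_results:
        if not all(field in item for field in REQUIRED_FIELDS):
            dropped += 1
            continue
        if item["claimId"] not in known_claim_ids:
            dropped += 1
            continue
        by_claim[item["claimId"]] = item
    return {"results": by_claim, "dropped": dropped}
```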
Use Case ID: VER-MVP-015-ReceiveBackendResults
Level: Internal system process
Actor:
Veritas Browser Extension
System Behavior:
- Parses backend response
- Stores results
- Updates UI
Use Case ID: VER-MVP-016-AnnotateArticle
Level: System-level end-to-end
Actor:
User via Veritas Browser Extension
System Behavior:
- Maps claims to article text
- Inserts verdict indicators
- Displays explanations
Use Case ID: VER-MVP-017-ManageUserSettings
Level: System-level UI interaction
Actor:
User
System Behavior:
- Displays settings
- Loads preferences
- Saves changes
Use Case ID: VER-MVP-018-SyncUIBackendAPI
Level: System-level backend integration
Actor:
System
System Behavior:
- Aligns request/response schemas
- Maintains compatibility across components
Post-Conditions:
- UI and backend remain synchronized
```mermaid
flowchart LR
%% Actors
U([User])
EXT([Veritas Extension])
BE([FastAPI Backend])
AG([Veritas Agent])
%% Use Cases
UC1((UC1 Launch Extension on Article))
UC2((UC2 Capture Article Text))
UC3((UC3 Send Article Data to Backend))
UC4((UC4 Validate Incoming Article Data))
UC5((UC5 Extract Claims from Text))
UC6((UC6 Package Claims for Agent))
UC7((UC7 Send Claims to Veritas Agent))
UC8((UC8 Perform Evidence-Based Analysis))
UC9((UC9 Receive Analysis Results from Agent))
UC10((UC10 Receive Backend Results))
UC11((UC11 Annotate Article with Indicators))
%% Relationships
U --> UC1
EXT --> UC2
EXT --> UC3
BE --> UC4
BE --> UC5
BE --> UC6
BE --> UC7
AG --> UC8
BE --> UC9
EXT --> UC10
EXT --> UC11
U --> UC11
```
```mermaid
classDiagram
direction LR
class VeritasExtension {
    +launchAnalysis()
    +showLoading()
    +renderAnnotations(results)
}
class ContentScript {
    +captureArticleText(): string
    +getPageUrl(): string
}
class AnalysisRequest {
    -articleText: string
    -url: string
    -requestId: string
}
class FastAPIBackend {
    +validateRequest(req)
    +processArticle(req)
}
class ClaimExtractor {
    +extractClaims(text): Claim[]
}
class AgentClient {
    +sendClaims(claims): ClaimResult[]
}
class VeritasAgent {
    +analyzeClaims(claims): ClaimResult[]
}
class Claim {
    +id: string
    +normalizedText: string
    +sourceSpanRef: string
}
class ClaimResult {
    +claimId: string
    +verdict: string
    +truthScore: int
    +biasScore: int
    +explanation: string
}
VeritasExtension --> ContentScript : uses
ContentScript --> AnalysisRequest : creates
VeritasExtension --> FastAPIBackend : sends requests
FastAPIBackend --> ClaimExtractor : uses
FastAPIBackend --> AgentClient : uses
ClaimExtractor --> Claim : creates
AgentClient --> VeritasAgent : calls
VeritasAgent --> ClaimResult : returns
FastAPIBackend --> ClaimResult : aggregates
VeritasExtension --> ClaimResult : displays
```
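The two data classes in the diagram translate directly into Python on the backend side. The 0-100 score scales in the comments are assumptions; the diagram only specifies the field types.

```python
from dataclasses import dataclass

# Python mirrors of the Claim and ClaimResult classes from the class diagram.
@dataclass
class Claim:
    id: str
    normalizedText: str
    sourceSpanRef: str  # reference back to the claim's location in the article

@dataclass
class ClaimResult:
    claimId: str
    verdict: str
    truthScore: int   # assumed 0-100 scale, higher = more likely true
    biasScore: int    # assumed 0-100 scale, higher = more biased
    explanation: str
```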
```mermaid
sequenceDiagram
autonumber
actor User
participant Extension as Veritas Extension
User->>Extension: Click Veritas icon
Extension->>Extension: Check if page is eligible
Extension->>Extension: Initialize analysis session
Extension-->>User: Show loading / analysis started UI
```
```mermaid
sequenceDiagram
autonumber
actor User
participant Extension as Veritas Extension
participant CS as Content Script
User->>Extension: Request analysis
Extension->>CS: Capture article text + URL
CS->>CS: Identify article container
CS->>CS: Extract and clean visible text
CS-->>Extension: Return cleaned text + URL
Extension->>Extension: Store article payload
```
```mermaid
sequenceDiagram
autonumber
actor User
participant Extension as Veritas Extension
participant Backend as FastAPI Backend
User->>Extension: Trigger analysis
Extension->>Extension: Build JSON payload
Extension->>Backend: POST /analyze_article
Backend-->>Extension: 200/202 Accepted
Extension->>Extension: Enter "processing" state
```
```mermaid
sequenceDiagram
autonumber
participant Extension as Veritas Extension
participant Backend as FastAPI Backend
Extension->>Backend: POST article text + URL
Backend->>Backend: Parse JSON body
Backend->>Backend: Validate required fields and length
alt Valid payload
    Backend-->>Extension: 200 OK (processing started)
else Invalid payload
    Backend-->>Extension: 4xx error + message
    Extension->>Extension: Display validation error
end
```
```mermaid
sequenceDiagram
autonumber
participant Backend as FastAPI Backend
participant Extractor as ClaimExtractor
Backend->>Extractor: extractClaims(articleText)
Extractor->>Extractor: Split text into sentences
Extractor->>Extractor: Identify candidate claims
Extractor->>Extractor: Normalize claims and assign IDs
Extractor-->>Backend: List<Claim>
Backend->>Backend: Store structured claims
```
```mermaid
sequenceDiagram
autonumber
participant Backend as FastAPI Backend
Backend->>Backend: Read claims + article metadata
alt Claims found
    Backend->>Backend: Build payload for agent
    Backend-->>Backend: Payload ready
else No claims
    Backend-->>Backend: Mark request as "no claims found"
end
```
```mermaid
sequenceDiagram
autonumber
participant Backend as FastAPI Backend
participant AgentClient
participant Agent as Veritas Agent
Backend->>AgentClient: sendClaims(payload)
AgentClient->>Agent: POST /analyze_claims
Agent->>Agent: Queue and run analysis
Agent-->>AgentClient: ClaimResult[] or error
AgentClient-->>Backend: Parsed results or error
```
```mermaid
sequenceDiagram
autonumber
participant Agent as Veritas Agent
participant Sources as Evidence Sources
Agent->>Agent: Iterate over claims
loop For each claim
    Agent->>Sources: Search evidence sources
    Sources-->>Agent: Return evidence snippets
    Agent->>Agent: Compare claim vs evidence
    Agent->>Agent: Assign verdict + scores
    Agent->>Agent: Generate explanation
end
Agent-->>Agent: Build ClaimResult[] payload
```
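The compare-and-score step inside the loop can be illustrated with a deliberately simple stand-in: word overlap between claim and evidence drives the truth score. This is a toy heuristic for exposition only; the real agent uses retrieval plus LLM reasoning, and the thresholds below are invented.

```python
def analyze_claim(claim_text: str, evidence_snippets: list[str]) -> dict:
    """Toy compare-and-score step: best word overlap with any evidence snippet."""
    claim_words = set(claim_text.lower().split())
    best_overlap = 0.0
    for snippet in evidence_snippets:
        snippet_words = set(snippet.lower().split())
        if claim_words:
            best_overlap = max(best_overlap,
                               len(claim_words & snippet_words) / len(claim_words))
    truth_score = round(best_overlap * 100)
    if truth_score >= 70:
        verdict = "supported"
    elif truth_score >= 40:
        verdict = "mixed"
    else:
        verdict = "unsupported"
    return {"verdict": verdict, "truthScore": truth_score,
            "explanation": f"Best evidence overlap: {truth_score}%"}
```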
```mermaid
sequenceDiagram
autonumber
participant Backend as FastAPI Backend
participant AgentClient
participant Agent as Veritas Agent
Backend->>AgentClient: Await analysis response
Agent-->>AgentClient: HTTP response with ClaimResult[]
AgentClient-->>Backend: Parsed ClaimResult[]
Backend->>Backend: Validate schema and fields
Backend->>Backend: Map results to claim IDs
Backend-->>Backend: Normalized dataset ready for extension
```
```mermaid
sequenceDiagram
autonumber
actor User
participant Extension as Veritas Extension
participant Backend as FastAPI Backend
Extension->>Backend: Await analysis response
Backend-->>Extension: JSON { claims[], verdicts, scores, explanations }
Extension->>Extension: Parse JSON
Extension->>Extension: Validate essential fields
Extension->>Extension: Store results for this tab
Extension-->>User: Update UI from "processing" to "ready"
```
```mermaid
sequenceDiagram
autonumber
actor User
participant Extension as Veritas Extension
participant CS as Content Script
Extension->>CS: Request claim-to-DOM mapping
CS->>CS: Locate spans/paragraphs for each claimId
CS-->>Extension: Mapping claimId -> DOM node
Extension->>CS: Inject badges and tooltip hooks
CS-->>User: Render annotated article
User->>CS: Hover or click badge
CS-->>User: Show tooltip (verdict, truth score, bias score, explanation)
```