64 commits
4200d0a
feat: add AI guidance and VSCode configuration files
JustAGhosT May 19, 2025
a215317
Update .vscode/launch.json
JustAGhosT May 19, 2025
aceb115
Update .vscode/tasks.json
JustAGhosT May 19, 2025
dd6452f
feat(market-data): implement core market data service architecture
JustAGhosT May 20, 2025
9b2f18f
feat(core): add market data core abstractions and validation utilities
JustAGhosT May 20, 2025
95f20d1
chore: update project structure and configuration
JustAGhosT May 20, 2025
6cfa019
feat(infrastructure): implement secret cache and expand event publishing
JustAGhosT May 20, 2025
8f45f06
feat(vscode): configure Azure Functions development environment
JustAGhosT May 20, 2025
aa81c17
Fix(code) Refactor market data repository and service architecture
JustAGhosT May 20, 2025
877e5c3
Merge branch 'feat/aiguidance' into feat/abstraction
JustAGhosT May 20, 2025
3842aac
feat: add validation and event handling infrastructure
JustAGhosT May 21, 2025
2db467c
feat(validation): refactor validation system and add event processor
JustAGhosT May 21, 2025
9bc880a
feat(testing): add code coverage tooling and initial tests
JustAGhosT May 21, 2025
0020fa9
ci: add GitHub Actions workflow for build and test
JustAGhosT May 21, 2025
8eeec38
feat(marketdata): add DisplayAssetId property to preserve original ca…
JustAGhosT May 21, 2025
dad982e
Update src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdi…
JustAGhosT May 21, 2025
19eb162
Merge pull request #6 from phoenixvc/feat/abstraction
JustAGhosT May 21, 2025
f394ea5
Merge pull request #5 from phoenixvc/feat/testing
JustAGhosT May 21, 2025
0f14fff
feat(cosmos): enhance CosmosRepository with partition key support and…
JustAGhosT May 21, 2025
b26e6e8
docs: enhance AI guidelines with code examples and implementation det…
JustAGhosT May 22, 2025
978c829
feat(core): add cancellation token support and improve repository int…
JustAGhosT May 22, 2025
4c2fb78
feat(api): add OpenAPI/Swagger documentation support
JustAGhosT May 22, 2025
dfcaffe
fix: improve validation and fix namespace issues
JustAGhosT May 22, 2025
a47ef53
feat(core): enhance secret cache and improve error handling
JustAGhosT May 22, 2025
f20f368
refactor(infrastructure): improve error handling and optimize CosmosD…
JustAGhosT May 22, 2025
7a4a461
feat: improve security and error handling
JustAGhosT May 22, 2025
a03b9ef
The changes refactor the CosmosRepository class to improve maintainab…
JustAGhosT May 22, 2025
71ff0d5
Test post-commit hook
JustAGhosT May 22, 2025
eca885f
Test post-commit with HTML reports
JustAGhosT May 22, 2025
de28de5
Test post-commit with HTML reports
JustAGhosT May 22, 2025
44a2c28
Test post-commit with HTML reports
JustAGhosT May 22, 2025
3ae60ac
Update .github/workflows/build-and-test.yml
JustAGhosT May 22, 2025
bbce4cb
Test post-commit with HTML reports
JustAGhosT May 22, 2025
3b1d19a
Update src/Phoenix.MarketData.Application/Events/MarketDataEventSubsc…
JustAGhosT May 22, 2025
df17832
Test post-commit with HTML reports
JustAGhosT May 22, 2025
a1d76f8
Test post-commit with HTML reports
JustAGhosT May 22, 2025
aed2cbc
Add coverage reports to gitignore
JustAGhosT May 22, 2025
2073c68
Add coverage reports to gitignore
JustAGhosT May 22, 2025
d9ce5db
Update .github/workflows/build-and-test.yml
JustAGhosT May 22, 2025
e707dc0
Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
JustAGhosT May 22, 2025
dd05d47
Update src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
JustAGhosT May 22, 2025
48aad77
Update .github/workflows/update-guidelines-timestamp.yml
JustAGhosT May 22, 2025
9de5513
Update src/Phoenix.MarketData.Core/Repositories/IRepository.cs
JustAGhosT May 22, 2025
bb8960c
Update src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs
JustAGhosT May 22, 2025
d1018d3
Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
JustAGhosT May 22, 2025
2b5a3ba
Update scripts/update-guidelines.js
JustAGhosT May 22, 2025
f57b19f
Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
JustAGhosT May 22, 2025
72ecdbc
Add coverage reports to gitignore
JustAGhosT May 22, 2025
e8ce251
ci: remove coverage threshold alerts from CodeCoverageSummary action
JustAGhosT May 22, 2025
518f87d
Haas
JustAGhosT May 22, 2025
7631136
Add coverage reports to gitignore
JustAGhosT May 22, 2025
d393efa
Fix: merge issues
JustAGhosT May 22, 2025
3206ad4
Fix: merge issues
JustAGhosT May 22, 2025
c7abc77
Fix: merge issues
JustAGhosT May 22, 2025
5c0e2e9
Fix: merge issues
JustAGhosT May 22, 2025
4b4310d
Fix: merge issues
JustAGhosT May 22, 2025
2805039
v4 upload
JustAGhosT May 23, 2025
1930df5
Merge pull request #8 from phoenixvc/fix/gh
JustAGhosT May 23, 2025
4789ab5
refactor: improve error handling and add region support to market dat…
JustAGhosT May 23, 2025
6036b1d
Merge branch 'feat/abstraction' of https://github.com/phoenixvc/Phoen…
JustAGhosT May 23, 2025
7362d7f
refactor: update test scripts and improve coverage report generation
JustAGhosT May 23, 2025
88df286
Apply suggestions from code review
JustAGhosT May 23, 2025
78e282f
refactor: migrate Core namespace to Domain namespace
JustAGhosT May 23, 2025
6e67dec
Merge branch 'feat/abstraction' of https://github.com/phoenixvc/Phoen…
JustAGhosT May 23, 2025
386 changes: 386 additions & 0 deletions .aiguidance/ai-guidelines.md
@@ -0,0 +1,386 @@
# AI Guidelines for VeritasVault.net

## Table of Contents

- [AI Guidelines for VeritasVault.net](#ai-guidelines-for-veritasvaultnet)
  - [Table of Contents](#table-of-contents)
  - [Overview](#overview)
  - [Core Principles](#core-principles)
  - [Implementation Guidelines](#implementation-guidelines)
    - [AI Component Architecture](#ai-component-architecture)
    - [User Experience Guidelines](#user-experience-guidelines)
  - [AI Features in VeritasVault.net](#ai-features-in-veritasvaultnet)
    - [Risk Assessment](#risk-assessment)
    - [Portfolio Optimization](#portfolio-optimization)
    - [Market Analysis](#market-analysis)
    - [Anomaly Detection](#anomaly-detection)
  - [Development Workflow](#development-workflow)
  - [Ethical Considerations](#ethical-considerations)
  - [Compliance](#compliance)
  - [Updates to These Guidelines](#updates-to-these-guidelines)

## Overview

This document outlines the guidelines and best practices for AI integration within the VeritasVault.net platform. These guidelines ensure consistent, ethical, and effective use of AI technologies across our enterprise liquidity management solutions.

## Core Principles

1. **Transparency**: All AI-driven decisions and recommendations should be explainable and transparent to users.
2. **Accuracy**: AI models should be regularly evaluated and validated for accuracy and reliability.
3. **Data Privacy**: AI systems must adhere to strict data privacy standards and regulations.
4. **User Control**: Users should maintain control over AI features and be able to override automated decisions.
5. **Continuous Improvement**: AI systems should be designed to learn and improve over time based on feedback.

## Implementation Guidelines

### AI Component Architecture

1. **Separation of Concerns**:

   - AI logic should be separated from UI components
   - Use dedicated services for AI processing
   - Implement clear interfaces between AI services and the rest of the application

**Example: AI Service Interface**

```typescript
// src/services/ai/interfaces.ts
export interface AIService<T, R> {
  process(input: T): Promise<AIResult<R>>;
  getConfidenceScore(): number;
  explainResult(result: AIResult<R>): string;
}

export interface AIResult<T> {
  data: T;
  confidence: number;
  timestamp: Date;
  modelVersion: string;
  explanation?: string;
}
```

2. **Client-Server Considerations**:

   - Heavy AI processing should occur on the server side
   - Client-side AI should be limited to lightweight inference
   - Consider using Web Workers for client-side AI to prevent UI blocking

**Example: Web Worker Setup**

```javascript
// src/workers/anomalyDetection.worker.js
self.onmessage = async (event) => {
  const { timeseriesData, threshold } = event.data;

  // Perform lightweight anomaly detection
  const anomalies = detectAnomalies(timeseriesData, threshold);

  // Return results to main thread
  self.postMessage({ anomalies, processingTime: performance.now() });
};

// Usage in component
const anomalyWorker = new Worker(
  new URL("./anomalyDetection.worker.js", import.meta.url),
);
anomalyWorker.onmessage = (event) => {
  setAnomalies(event.data.anomalies);
  setIsProcessing(false);
};
```

3. **Next.js Integration**:

   - Use server components for AI data processing
   - Wrap client components using AI hooks in Suspense boundaries
   - Implement proper error boundaries for AI components

**Example: Next.js AI Component Pattern**

```tsx
// app/portfolios/[id]/risk-assessment/page.tsx
import { Suspense } from "react";
import { AIErrorBoundary } from "@/components/error-boundaries";
import { RiskAssessmentSkeleton } from "@/components/skeletons";
import { RiskAssessmentComponent } from "@/components/risk-assessment";

export default function RiskAssessmentPage({
  params,
}: {
  params: { id: string };
}) {
  return (
    <div className="risk-container">
      <h1>Portfolio Risk Assessment</h1>

      <AIErrorBoundary
        fallback={
          <div>
            Unable to generate risk assessment. View manual analysis instead.
          </div>
        }
      >
        <Suspense fallback={<RiskAssessmentSkeleton />}>
          <RiskAssessmentComponent portfolioId={params.id} />
        </Suspense>
      </AIErrorBoundary>
    </div>
  );
}
```

### User Experience Guidelines

1. **Loading States**:

   - Always provide clear loading indicators for AI operations
   - Consider skeleton screens for AI-populated content
   - Implement progressive loading for AI-heavy dashboards
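
A progressive-loading dashboard can be modeled as a small state machine that tracks which sections have arrived. The sketch below is a hypothetical, framework-agnostic helper (the name `createProgressiveLoader` and the phase labels are illustrative, not part of the codebase):

```typescript
// Hypothetical helper: tracks which AI dashboard sections have loaded so the
// UI can move from skeleton -> partial -> complete (or error) progressively.
type LoadPhase = "skeleton" | "partial" | "complete" | "error";

function createProgressiveLoader(sections: string[]) {
  let phase: LoadPhase = "skeleton";
  const loaded = new Set<string>();

  return {
    // Mark one section as loaded and recompute the overall phase.
    sectionLoaded(name: string): LoadPhase {
      loaded.add(name);
      phase = loaded.size >= sections.length ? "complete" : "partial";
      return phase;
    },
    // Any section failing drops the dashboard into the error phase, where the
    // UI should render a fallback instead of an indefinite spinner.
    fail(): LoadPhase {
      phase = "error";
      return phase;
    },
    phase: () => phase,
  };
}
```

A component can render a skeleton in the `skeleton` phase, partial content in `partial`, and swap to a fallback in `error`.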

2. **Error Handling**:

   - Provide graceful fallbacks when AI services fail
   - Offer manual alternatives to automated AI features
   - Communicate errors in user-friendly language

**Example: Error Boundary Implementation**

```tsx
// components/error-boundaries.tsx
"use client";

import { Component, ErrorInfo, ReactNode } from "react";
import { reportAIError } from "@/lib/monitoring";

interface Props {
  children: ReactNode;
  fallback: ReactNode;
}

interface State {
  hasError: boolean;
  error?: Error;
}

export class AIErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error };
  }

  componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
    reportAIError({
      error,
      component: errorInfo.componentStack,
      timestamp: new Date(),
    });
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback;
    }

    return this.props.children;
  }
}
```

3. **Feedback Mechanisms**:

   - Allow users to provide feedback on AI recommendations
   - Track and analyze user interactions with AI features
   - Implement mechanisms to report and address AI inaccuracies
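
One lightweight way to close the feedback loop is to record structured feedback events and aggregate helpfulness per feature, so inaccurate features surface quickly. A minimal sketch (the record shape and names are illustrative assumptions):

```typescript
// Hypothetical feedback record for a single AI recommendation.
interface AIFeedback {
  featureId: string; // e.g. "portfolio-optimization"
  resultId: string;  // the specific AI result being rated
  helpful: boolean;
  comment?: string;
}

// Aggregate the helpful rate per feature from raw feedback events.
function summarizeFeedback(items: AIFeedback[]): Map<string, number> {
  const totals = new Map<string, { total: number; helpful: number }>();
  for (const f of items) {
    const t = totals.get(f.featureId) ?? { total: 0, helpful: 0 };
    t.total += 1;
    if (f.helpful) t.helpful += 1;
    totals.set(f.featureId, t);
  }
  const rates = new Map<string, number>();
  for (const [feature, t] of totals) rates.set(feature, t.helpful / t.total);
  return rates;
}
```

A feature whose helpful rate falls below a chosen threshold can then be flagged for review.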

## AI Features in VeritasVault.net

### Risk Assessment

- Implement confidence scores with all risk assessments
- Provide detailed explanations for risk categorizations
- Allow users to adjust risk parameters and see updated assessments
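
Letting users adjust risk parameters and see updated assessments implies the score should be a pure function of portfolio plus parameters, so it can be recomputed on every change. A simplified sketch (the weights and scoring formula are illustrative, not the platform's actual model):

```typescript
interface AssetRisk {
  ticker: string;
  volatility: number; // e.g. annualized, in [0, 1]
  allocation: number; // fraction of portfolio, in [0, 1]
}

interface RiskParams {
  volatilityWeight: number;
  concentrationWeight: number;
}

// Pure function: same portfolio + same params -> same score, so the UI can
// recompute instantly when the user adjusts a parameter.
function overallRiskScore(assets: AssetRisk[], params: RiskParams): number {
  const weightedVol = assets.reduce((s, a) => s + a.volatility * a.allocation, 0);
  const concentration = Math.max(...assets.map((a) => a.allocation));
  return (
    weightedVol * params.volatilityWeight +
    concentration * params.concentrationWeight
  );
}
```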

### Portfolio Optimization

- Clearly indicate when recommendations are AI-generated
- Show the reasoning behind allocation recommendations
- Allow users to set constraints for optimization algorithms
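
User-supplied constraints should be validated before they reach the optimizer, so infeasible inputs fail with a clear message rather than a cryptic solver error. A plausible sketch (the constraint shape is an assumption):

```typescript
interface AllocationConstraint {
  ticker: string;
  min: number; // minimum allocation fraction
  max: number; // maximum allocation fraction
}

// Returns human-readable problems; an empty array means the constraints are
// at least feasible enough to hand to the optimizer.
function validateConstraints(constraints: AllocationConstraint[]): string[] {
  const errors: string[] = [];
  let minTotal = 0;
  for (const c of constraints) {
    if (c.min < 0 || c.max > 1 || c.min > c.max) {
      errors.push(`Invalid bounds for ${c.ticker}: [${c.min}, ${c.max}]`);
    }
    minTotal += c.min;
  }
  if (minTotal > 1) {
    errors.push("Minimum allocations sum to more than 100% of the portfolio");
  }
  return errors;
}
```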

### Market Analysis

- Distinguish between factual market data and AI predictions
- Provide historical accuracy metrics for predictive features
- Update prediction models regularly with new market data
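
Historical accuracy for directional predictions can be reported as a simple hit rate. One possible metric, sketched below (the record shape is an assumption):

```typescript
interface PredictionRecord {
  predictedChange: number; // model's predicted price change
  actualChange: number;    // realized price change
}

// Fraction of predictions whose direction (sign) matched the realized move.
function directionalAccuracy(records: PredictionRecord[]): number {
  if (records.length === 0) return 0;
  const hits = records.filter(
    (r) => Math.sign(r.predictedChange) === Math.sign(r.actualChange),
  ).length;
  return hits / records.length;
}
```

Showing this number next to each predictive feature lets users calibrate how much trust to place in it.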

### Anomaly Detection

- Set appropriate thresholds to minimize false positives
- Provide context for detected anomalies
- Implement user feedback for anomaly reports
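
Threshold choice is the main lever against false positives. A minimal z-score sketch of a `detectAnomalies` helper (the kind the Web Worker example above assumes; the real detector may be entirely different):

```typescript
// Minimal z-score anomaly detector: flags indices whose distance from the
// mean exceeds `threshold` standard deviations. Raising the threshold
// reduces false positives at the cost of missing smaller anomalies.
function detectAnomalies(series: number[], threshold = 3): number[] {
  if (series.length === 0) return [];
  const mean = series.reduce((s, x) => s + x, 0) / series.length;
  const variance =
    series.reduce((s, x) => s + (x - mean) ** 2, 0) / series.length;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // a constant series has no anomalies
  const flagged: number[] = [];
  series.forEach((x, i) => {
    if (Math.abs(x - mean) / std > threshold) flagged.push(i);
  });
  return flagged;
}
```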

## Development Workflow

1. **Testing AI Components**:

   - Create specific test cases for AI functionality
   - Test with diverse and edge-case data
   - Implement A/B testing for new AI features

**Example: AI Component Test**

```typescript
// __tests__/ai/risk-assessment.test.ts
import { RiskAssessmentService } from "@/services/ai/risk-assessment";

describe("Risk Assessment Service", () => {
  const mockPortfolio = {
    id: "port-123",
    assets: [
      { type: "equity", ticker: "AAPL", allocation: 0.2 },
      { type: "fixed_income", ticker: "BND", allocation: 0.5 },
      { type: "crypto", ticker: "BTC", allocation: 0.3 },
    ],
  };

  it("should provide risk scores for all asset classes", async () => {
    const service = new RiskAssessmentService();
    const result = await service.process(mockPortfolio);

    expect(result.data.overallRiskScore).toBeDefined();
    expect(result.data.assetClassRisks).toHaveProperty("equity");
    expect(result.data.assetClassRisks).toHaveProperty("fixed_income");
    expect(result.data.assetClassRisks).toHaveProperty("crypto");
    expect(result.confidence).toBeGreaterThan(0.7);
  });

  it("should identify high-risk assets correctly", async () => {
    const service = new RiskAssessmentService();
    const result = await service.process(mockPortfolio);

    expect(result.data.highRiskAssets).toContain("BTC");
  });
});
```

2. **Versioning**:

   - Track AI model versions in application code
   - Document changes between model versions
   - Implement feature flags for new AI capabilities
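
Version tracking and feature flags can be combined in one small registry that resolves which model version a request should use. A hypothetical sketch (the registry contents, versions, and flag names are invented):

```typescript
interface ModelEntry {
  version: string;
  flag?: string; // feature flag that must be enabled to use this version
  fallbackVersion: string;
}

// Hypothetical registry; in practice this would live in configuration.
const MODEL_REGISTRY: Record<string, ModelEntry> = {
  "risk-assessment": { version: "2.3.0", flag: "risk-v2", fallbackVersion: "1.9.1" },
  "anomaly-detection": { version: "1.4.2", fallbackVersion: "1.4.2" },
};

// Resolve a model version for a feature, honoring feature flags so new
// model versions can be rolled out (and rolled back) without a deploy.
function resolveModelVersion(feature: string, enabledFlags: Set<string>): string {
  const entry = MODEL_REGISTRY[feature];
  if (!entry) throw new Error(`Unknown AI feature: ${feature}`);
  if (entry.flag && !enabledFlags.has(entry.flag)) return entry.fallbackVersion;
  return entry.version;
}
```

Logging the resolved version alongside each AI result (as `modelVersion` in `AIResult`) keeps outputs traceable to the model that produced them.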

3. **Monitoring**:

   - Log AI performance metrics
   - Track user engagement with AI features
   - Monitor for drift in AI model accuracy
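
Drift can be flagged by comparing a recent accuracy window against a baseline window; a minimal sketch, assuming accuracy is tracked as per-batch scores in [0, 1]:

```typescript
// Flag drift when mean accuracy over a recent window drops below the
// baseline window's mean by more than `tolerance`.
function hasAccuracyDrift(
  baseline: number[],
  recent: number[],
  tolerance = 0.05,
): boolean {
  if (baseline.length === 0 || recent.length === 0) return false;
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
  return mean(baseline) - mean(recent) > tolerance;
}
```

A check like this can run on a schedule and page the team (or trigger retraining) when it returns true.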

4. **CI/CD Integration**:

   - Automate AI model testing with each code change
   - Implement quality gates for AI performance metrics
   - Include AI-specific resources in infrastructure as code

**Example: GitHub Actions Workflow for AI Testing**

```yaml
# .github/workflows/ai-model-tests.yml
name: AI Model Testing

on:
  push:
    branches: [main]
    paths:
      - "src/services/ai/**"
      - "models/**"
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "20"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run AI unit tests
        run: npm run test:ai

      - name: Run AI model performance benchmark
        run: npm run benchmark:ai

      - name: Verify model accuracy is above threshold
        run: node scripts/verify-model-accuracy.js --threshold=0.85

      - name: Report AI test metrics
        if: always()
        run: node scripts/report-ai-metrics.js
        env:
          METRICS_API_KEY: ${{ secrets.METRICS_API_KEY }}
```

## Ethical Considerations

1. **Fairness**:

   - Regularly audit AI systems for bias
   - Test with diverse data representing all user groups
   - Implement fairness metrics in AI evaluation
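
One concrete fairness metric is the gap in favorable-outcome rates across user groups (demographic parity). A minimal sketch, where the group labels and the "approved" outcome are illustrative:

```typescript
interface Outcome {
  group: string;     // e.g. a demographic or account segment
  approved: boolean; // whether the AI produced the favorable outcome
}

// Demographic parity gap: difference between the highest and lowest
// favorable-outcome rate across groups. 0 means perfectly equal rates.
function demographicParityGap(outcomes: Outcome[]): number {
  const counts = new Map<string, { total: number; approved: number }>();
  for (const o of outcomes) {
    const c = counts.get(o.group) ?? { total: 0, approved: 0 };
    c.total += 1;
    if (o.approved) c.approved += 1;
    counts.set(o.group, c);
  }
  const rates = [...counts.values()].map((c) => c.approved / c.total);
  if (rates.length === 0) return 0;
  return Math.max(...rates) - Math.min(...rates);
}
```

Tracking this gap over time makes bias audits quantitative rather than anecdotal; demographic parity is only one of several fairness definitions and may not suit every use case.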

2. **Accountability**:

   - Clearly define responsibility for AI-driven decisions
   - Maintain audit trails for critical AI operations
   - Establish review processes for AI systems

3. **Sustainability**:

   - Optimize AI models for computational efficiency
   - Consider environmental impact of training and inference
   - Balance model complexity with resource usage

## Compliance

1. **Regulatory Adherence**:

   - Ensure AI systems comply with financial regulations
   - Maintain documentation for regulatory review
   - Implement controls for regulated AI use cases

**Regulatory References:**

- [SEC 17 CFR § 275.204-2(a)(16)](https://www.ecfr.gov/current/title-17/section-275.204-2) - Books and records to be maintained by investment advisers
- [GDPR Article 22](https://gdpr-info.eu/art-22-gdpr/) - Automated individual decision-making, including profiling
- [FINRA Regulatory Notice 21-25](https://www.finra.org/rules-guidance/notices/21-25) - Artificial Intelligence in the Securities Industry
- [Federal Reserve SR Letter 11-7](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm) - Guidance on Model Risk Management

Comment on lines +363 to +367
💡 Verification agent

🧩 Analysis chain

Verify regulatory references are current and applicable.

The regulatory references cited should be verified to ensure they are current and applicable to the financial services provided by VeritasVault.net.


🌐 Web query:

Are SEC 17 CFR § 275.204-2(a)(16), FINRA Regulatory Notice 21-25, and Federal Reserve SR Letter 11-7 the current regulatory standards for AI in financial services in 2025?

💡 Result:

Based on the search results, the regulatory landscape for AI in financial services in 2025 appears more complex than the specific regulations mentioned in your query. While some of these regulations remain relevant, there have been significant developments in how regulators approach AI governance.

Current Regulatory Framework for AI in Financial Services (2025)

The regulatory approach to AI in financial services as of May 2025 is characterized by a combination of existing frameworks being applied to new technologies, rather than a single comprehensive AI regulation. Despite the focus on AI by U.S. financial regulators, the SEC, CFTC, and FINRA have not yet issued new regulations specifically addressing AI use[5].

FINRA's Current Approach

FINRA has emphasized a technology-neutral regulatory approach, meaning that existing rules apply to AI just as they would to any other technology. In January 2025, FINRA published its Annual Regulatory Oversight Report which highlighted several AI-related regulatory risks[1][5]. This report addressed:

  • Financial crimes prevention, including how bad actors exploit generative AI for investment scams, synthetic ID creation, and account fraud[1]
  • Emerging risks related to third-party vendors providing AI tools[5]
  • The need for supervision of AI usage at both enterprise and individual levels[5]

FINRA's guidance advises firms to identify risks related to AI accuracy or bias, mitigate cybersecurity risks, and implement robust cybersecurity programs to combat AI-driven threats[5].

SEC Recordkeeping Requirements

Regarding SEC Rule 17 CFR § 275.204-2(a)(16), the search results indicate that various SEC recordkeeping provisions may be implicated by AI use. In 2023, the SEC proposed amendments to Exchange Act Rules 17a-3 and 17a-4 and Advisers Act Rule 204-2 to require broker-dealers and investment advisers to maintain books and records related to the requirements of proposed conflict rules for AI use[8]. However, the search results do not specifically confirm whether these proposals have been finalized or implemented as of May 2025.

Federal Reserve Guidance

The search results mention Federal Reserve SR Letter 11-7 on Supervisory Guidance on Model Risk Management in the context of AI cybersecurity risk management[7]. This suggests that this guidance remains relevant in 2025, though it may be supplemented by newer frameworks.

Recent Regulatory Developments

Executive Order and Policy Shifts

A significant development was the Trump administration's January 23, 2025, executive order on AI, which directs the development of an action plan to "sustain and enhance America's global AI dominance"[5]. This signals a potential shift in the regulatory approach compared to the previous administration.

FINRA Forward Initiative

On April 21, 2025, FINRA unveiled "FINRA Forward," a broad review of its rules and regulatory framework intended to modernize existing regulations[10]. This initiative includes:

  • Focus areas of capital formation and the modern workplace
  • A series of Regulatory Notices (25-04, 25-05, 25-06, and 25-07) outlining proposed changes
  • Consideration of changes to existing rules designed to help prevent fraud
  • Exploration of ways FINRA systems can be leveraged to support member compliance[10]

Treasury Department Guidance

The Treasury Department has also been active in providing guidance on AI in financial services. A report on "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector" was issued in response to Executive Order 14110[7]. This report provides best practices for financial institutions to manage AI-specific cybersecurity risks.

In summary, while SEC 17 CFR § 275.204-2(a)(16), FINRA Regulatory Notice 21-25, and Federal Reserve SR Letter 11-7 remain part of the regulatory framework, they do not represent the complete or most current standards for AI in financial services as of May 2025. The regulatory landscape continues to evolve, with FINRA's technology-neutral approach, ongoing SEC proposals, Treasury Department guidance, and the new executive order all shaping how financial institutions should approach AI governance and compliance.

Update AI Regulatory References for 2025
The four citations above remain relevant but no longer reflect the full AI governance landscape. Please expand the list in .aiguidance/ai-guidelines.md (lines 363–367) to include recent U.S. financial-services AI developments:

  • FINRA’s 2025 Annual Regulatory Oversight Report on AI-related risks and supervision ([2025-Annual-Regulatory-Oversight-Report.pdf])
  • U.S. Treasury Department’s “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector” report
  • SEC’s 2023–2025 proposed amendments to Exchange Act Rules 17a-3/17a-4 and Advisers Act Rule 204-2 covering AI recordkeeping
  • The January 23, 2025 executive order on AI leadership (which superseded Executive Order 14110 of October 2023, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”)
  • FINRA Forward initiative and its Regulatory Notices 25-04 through 25-07

Adjust the snippet accordingly to ensure VeritasVault.net’s AI guidelines align with the current regulatory framework.

🤖 Prompt for AI Agents
In .aiguidance/ai-guidelines.md around lines 363 to 367, the current regulatory
references for AI in financial services are outdated and incomplete. Expand the
list to include recent developments such as FINRA’s 2025 Annual Regulatory
Oversight Report on AI risks, the U.S. Treasury Department’s report on managing
AI-specific cybersecurity risks, SEC’s 2023–2025 proposed amendments on AI
recordkeeping, the January 2025 Executive Order 14110 on AI innovation, and the
FINRA Forward initiative with its Regulatory Notices 25-04 to 25-07. Update the
snippet to reflect these additions for accurate and current AI governance
guidance.

2. **Data Governance**:

   - Establish clear data retention policies for AI training
   - Document data lineage for AI models
   - Implement data minimization practices
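
Retention policies are easier to enforce when expressed as data rather than prose. A hypothetical sketch of checking training records against per-dataset age limits (the types and dataset names are invented):

```typescript
interface RetentionPolicy {
  dataset: string;
  maxAgeDays: number; // how long records may be kept for AI training
}

interface TrainingRecord {
  dataset: string;
  createdAt: Date;
}

// Return indices of records that have outlived their dataset's policy and
// should be purged — data minimization in practice.
function findExpiredRecords(
  records: TrainingRecord[],
  policies: RetentionPolicy[],
  now: Date,
): number[] {
  const maxAge = new Map(policies.map((p) => [p.dataset, p.maxAgeDays]));
  const msPerDay = 86_400_000;
  const expired: number[] = [];
  records.forEach((r, i) => {
    const limit = maxAge.get(r.dataset);
    if (limit === undefined) return; // no policy for this dataset
    const ageDays = (now.getTime() - r.createdAt.getTime()) / msPerDay;
    if (ageDays > limit) expired.push(i);
  });
  return expired;
}
```

Running a check like this on a schedule, and logging what was purged, also produces the audit trail the accountability section calls for.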

**Regulatory References:**

- [GDPR Article 5](https://gdpr-info.eu/art-5-gdpr/) - Principles relating to processing of personal data
- [CCPA §1798.100](https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=CIV&sectionNum=1798.100) - Consumer right to know
- [NY DFS 23 NYCRR 500](https://www.dfs.ny.gov/industry_guidance/cybersecurity) - Cybersecurity Requirements for Financial Services Companies

## Updates to These Guidelines

These guidelines will be reviewed and updated quarterly to reflect new AI capabilities, regulatory changes, and industry best practices. All team members are encouraged to contribute suggestions for improvements.

> **Note:** The "Last Updated" date at the bottom of this document is automatically maintained by a GitHub Actions workflow whenever changes are pushed to this file. Please do not modify this date manually.

Last Updated: May 11, 2025
6 changes: 6 additions & 0 deletions .aiguidance/context-providers.json
@@ -0,0 +1,6 @@
{
  "currentFile": { "maxTokens": 20000 },
  "diff": { "maxTokens": 10000 },
  "terminal": { "maxLines": 1000 },
  "folder": { "maxDepth": 4, "maxFiles": 30, "maxFileSize": 80000 }
}