# Feat/abstraction #7 (Open)

JustAGhosT wants to merge 64 commits into `master` from `feat/abstraction`.

## Commits
All 64 commits are authored by JustAGhosT:

- 4200d0a feat: add AI guidance and VSCode configuration files
- a215317 Update .vscode/launch.json
- aceb115 Update .vscode/tasks.json
- dd6452f feat(market-data): implement core market data service architecture
- 9b2f18f feat(core): add market data core abstractions and validation utilities
- 95f20d1 chore: update project structure and configuration
- 6cfa019 feat(infrastructure): implement secret cache and expand event publishing
- 8f45f06 feat(vscode): configure Azure Functions development environment
- aa81c17 Fix(code) Refactor market data repository and service architecture
- 877e5c3 Merge branch 'feat/aiguidance' into feat/abstraction
- 3842aac feat: add validation and event handling infrastructure
- 2db467c feat(validation): refactor validation system and add event processor
- 9bc880a feat(testing): add code coverage tooling and initial tests
- 0020fa9 ci: add GitHub Actions workflow for build and test
- 8eeec38 feat(marketdata): add DisplayAssetId property to preserve original ca…
- dad982e Update src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdi…
- 19eb162 Merge pull request #6 from phoenixvc/feat/abstraction
- f394ea5 Merge pull request #5 from phoenixvc/feat/testing
- 0f14fff feat(cosmos): enhance CosmosRepository with partition key support and…
- b26e6e8 docs: enhance AI guidelines with code examples and implementation det…
- 978c829 feat(core): add cancellation token support and improve repository int…
- 4c2fb78 feat(api): add OpenAPI/Swagger documentation support
- dfcaffe fix: improve validation and fix namespace issues
- a47ef53 feat(core): enhance secret cache and improve error handling
- f20f368 refactor(infrastructure): improve error handling and optimize CosmosD…
- 7a4a461 feat: improve security and error handling
- a03b9ef The changes refactor the CosmosRepository class to improve maintainab…
- 71ff0d5 Test post-commit hook
- eca885f Test post-commit with HTML reports
- de28de5 Test post-commit with HTML reports
- 44a2c28 Test post-commit with HTML reports
- 3ae60ac Update .github/workflows/build-and-test.yml
- bbce4cb Test post-commit with HTML reports
- 3b1d19a Update src/Phoenix.MarketData.Application/Events/MarketDataEventSubsc…
- df17832 Test post-commit with HTML reports
- a1d76f8 Test post-commit with HTML reports
- aed2cbc Add coverage reports to gitignore
- 2073c68 Add coverage reports to gitignore
- d9ce5db Update .github/workflows/build-and-test.yml
- e707dc0 Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
- dd05d47 Update src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
- 48aad77 Update .github/workflows/update-guidelines-timestamp.yml
- 9de5513 Update src/Phoenix.MarketData.Core/Repositories/IRepository.cs
- bb8960c Update src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs
- d1018d3 Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
- 2b5a3ba Update scripts/update-guidelines.js
- f57b19f Update tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverag…
- 72ecdbc Add coverage reports to gitignore
- e8ce251 ci: remove coverage threshold alerts from CodeCoverageSummary action
- 518f87d Haas
- 7631136 Add coverage reports to gitignore
- d393efa Fix: merge issues
- 3206ad4 Fix: merge issues
- c7abc77 Fix: merge issues
- 5c0e2e9 Fix: merge issues
- 4b4310d Fix: merge issues
- 2805039 v4 upload
- 1930df5 Merge pull request #8 from phoenixvc/fix/gh
- 4789ab5 refactor: improve error handling and add region support to market dat…
- 6036b1d Merge branch 'feat/abstraction' of https://github.com/phoenixvc/Phoen…
- 7362d7f refactor: update test scripts and improve coverage report generation
- 88df286 Apply suggestions from code review
- 78e282f refactor: migrate Core namespace to Domain namespace
- 6e67dec Merge branch 'feat/abstraction' of https://github.com/phoenixvc/Phoen…
## Files changed

### .aiguidance/ai-guidelines.md (new file, +386 lines)
# AI Guidelines for VeritasVault.net

## Table of Contents

- [AI Guidelines for VeritasVault.net](#ai-guidelines-for-veritasvaultnet)
  - [Table of Contents](#table-of-contents)
  - [Overview](#overview)
  - [Core Principles](#core-principles)
  - [Implementation Guidelines](#implementation-guidelines)
    - [AI Component Architecture](#ai-component-architecture)
    - [User Experience Guidelines](#user-experience-guidelines)
  - [AI Features in VeritasVault.net](#ai-features-in-veritasvaultnet)
    - [Risk Assessment](#risk-assessment)
    - [Portfolio Optimization](#portfolio-optimization)
    - [Market Analysis](#market-analysis)
    - [Anomaly Detection](#anomaly-detection)
  - [Development Workflow](#development-workflow)
  - [Ethical Considerations](#ethical-considerations)
  - [Compliance](#compliance)
  - [Updates to These Guidelines](#updates-to-these-guidelines)

## Overview

This document outlines the guidelines and best practices for AI integration within the VeritasVault.net platform. These guidelines ensure consistent, ethical, and effective use of AI technologies across our enterprise liquidity management solutions.

## Core Principles

1. **Transparency**: All AI-driven decisions and recommendations should be explainable and transparent to users.
2. **Accuracy**: AI models should be regularly evaluated and validated for accuracy and reliability.
3. **Data Privacy**: AI systems must adhere to strict data privacy standards and regulations.
4. **User Control**: Users should maintain control over AI features and be able to override automated decisions.
5. **Continuous Improvement**: AI systems should be designed to learn and improve over time based on feedback.
## Implementation Guidelines

### AI Component Architecture

1. **Separation of Concerns**:

   - AI logic should be separated from UI components
   - Use dedicated services for AI processing
   - Implement clear interfaces between AI services and the rest of the application

   **Example: AI Service Interface**

   ```typescript
   // src/services/ai/interfaces.ts
   export interface AIService<T, R> {
     process(input: T): Promise<AIResult<R>>;
     getConfidenceScore(): number;
     explainResult(result: AIResult<R>): string;
   }

   export interface AIResult<T> {
     data: T;
     confidence: number;
     timestamp: Date;
     modelVersion: string;
     explanation?: string;
   }
   ```
2. **Client-Server Considerations**:

   - Heavy AI processing should occur on the server side
   - Client-side AI should be limited to lightweight inference
   - Consider using Web Workers for client-side AI to prevent UI blocking

   **Example: Web Worker Setup**

   ```javascript
   // src/workers/anomalyDetection.worker.js
   // NOTE: detectAnomalies must be defined in (or imported into) this worker file.
   self.onmessage = async (event) => {
     const { timeseriesData, threshold } = event.data;

     // Perform lightweight anomaly detection
     const anomalies = detectAnomalies(timeseriesData, threshold);

     // Return results to main thread
     self.postMessage({ anomalies, processingTime: performance.now() });
   };

   // Usage in component
   const anomalyWorker = new Worker(
     new URL("./anomalyDetection.worker.js", import.meta.url),
   );
   anomalyWorker.onmessage = (event) => {
     setAnomalies(event.data.anomalies);
     setIsProcessing(false);
   };
   // Kick off processing by posting the data the worker expects
   anomalyWorker.postMessage({ timeseriesData, threshold: 3 });
   ```
3. **Next.js Integration**:

   - Use server components for AI data processing
   - Wrap client components using AI hooks in Suspense boundaries
   - Implement proper error boundaries for AI components

   **Example: Next.js AI Component Pattern**

   ```tsx
   // app/portfolios/[id]/risk-assessment/page.tsx
   import { Suspense } from "react";
   import { AIErrorBoundary } from "@/components/error-boundaries";
   import { RiskAssessmentSkeleton } from "@/components/skeletons";
   import { RiskAssessmentComponent } from "@/components/risk-assessment";

   export default function RiskAssessmentPage({
     params,
   }: {
     params: { id: string };
   }) {
     return (
       <div className="risk-container">
         <h1>Portfolio Risk Assessment</h1>

         <AIErrorBoundary
           fallback={
             <div>
               Unable to generate risk assessment. View manual analysis instead.
             </div>
           }
         >
           <Suspense fallback={<RiskAssessmentSkeleton />}>
             <RiskAssessmentComponent portfolioId={params.id} />
           </Suspense>
         </AIErrorBoundary>
       </div>
     );
   }
   ```
### User Experience Guidelines

1. **Loading States**:

   - Always provide clear loading indicators for AI operations
   - Consider skeleton screens for AI-populated content
   - Implement progressive loading for AI-heavy dashboards
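As an illustration of progressive loading, the sketch below stages AI dashboard panels by priority so the most important insight replaces its skeleton first. `PanelLoader` and `loadPanelsProgressively` are hypothetical names, not part of the existing codebase.

```typescript
// Hypothetical helper: load AI dashboard panels in priority order so the
// most important insights render first (progressive loading).
type PanelLoader = {
  name: string;
  priority: number; // higher loads first
  load: () => Promise<string>;
};

export async function loadPanelsProgressively(
  panels: PanelLoader[],
  onReady: (name: string, content: string) => void,
): Promise<string[]> {
  // Sort highest priority first; lower-priority panels keep their skeletons
  // until their turn comes.
  const ordered = [...panels].sort((a, b) => b.priority - a.priority);
  const loadedOrder: string[] = [];
  for (const panel of ordered) {
    const content = await panel.load();
    onReady(panel.name, content); // e.g. swap a skeleton for real content
    loadedOrder.push(panel.name);
  }
  return loadedOrder;
}
```

A real dashboard would likely load panels concurrently and reveal them as they resolve; the sequential loop above just makes the priority ordering explicit.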
2. **Error Handling**:

   - Provide graceful fallbacks when AI services fail
   - Offer manual alternatives to automated AI features
   - Communicate errors in user-friendly language

   **Example: Error Boundary Implementation**

   ```tsx
   // components/error-boundaries.tsx
   "use client";

   import { Component, ErrorInfo, ReactNode } from "react";
   import { reportAIError } from "@/lib/monitoring";

   interface Props {
     children: ReactNode;
     fallback: ReactNode;
   }

   interface State {
     hasError: boolean;
     error?: Error;
   }

   export class AIErrorBoundary extends Component<Props, State> {
     constructor(props: Props) {
       super(props);
       this.state = { hasError: false };
     }

     static getDerivedStateFromError(error: Error): State {
       return { hasError: true, error };
     }

     componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
       reportAIError({
         error,
         component: errorInfo.componentStack,
         timestamp: new Date(),
       });
     }

     render() {
       if (this.state.hasError) {
         return this.props.fallback;
       }

       return this.props.children;
     }
   }
   ```
3. **Feedback Mechanisms**:
   - Allow users to provide feedback on AI recommendations
   - Track and analyze user interactions with AI features
   - Implement mechanisms to report and address AI inaccuracies
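A minimal sketch of the feedback-tracking idea, assuming an in-memory store; `FeedbackTracker` and the `AIFeedback` shape are illustrative, and a real implementation would persist entries to an analytics backend for later review of inaccuracies.

```typescript
// Hypothetical feedback store for AI recommendations.
type AIFeedback = {
  recommendationId: string;
  helpful: boolean;
  comment?: string; // free-text report of an inaccuracy, if any
};

export class FeedbackTracker {
  private entries: AIFeedback[] = [];

  record(feedback: AIFeedback): void {
    this.entries.push(feedback);
  }

  // Share of recommendations users marked helpful (0..1); 0 when no data.
  helpfulRate(): number {
    if (this.entries.length === 0) return 0;
    const helpful = this.entries.filter((e) => e.helpful).length;
    return helpful / this.entries.length;
  }
}
```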
## AI Features in VeritasVault.net

### Risk Assessment

- Implement confidence scores with all risk assessments
- Provide detailed explanations for risk categorizations
- Allow users to adjust risk parameters and see updated assessments
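One way to surface a confidence score alongside each assessment is to map the model's numeric confidence to a user-facing label. The thresholds below are illustrative, not mandated by these guidelines (0.7 echoes the floor used in the test example later in this document).

```typescript
// Hypothetical mapping from a model confidence score (0..1) to the label
// displayed next to a risk assessment.
export function confidenceLabel(confidence: number): string {
  if (confidence < 0 || confidence > 1) {
    throw new RangeError("confidence must be in [0, 1]");
  }
  if (confidence >= 0.9) return "High confidence";
  if (confidence >= 0.7) return "Moderate confidence";
  // Low-confidence results should prompt manual review rather than be hidden.
  return "Low confidence - review manually";
}
```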
### Portfolio Optimization

- Clearly indicate when recommendations are AI-generated
- Show the reasoning behind allocation recommendations
- Allow users to set constraints for optimization algorithms
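User-set constraints need validating before they reach the optimizer. The sketch below shows one plausible constraint shape and a feasibility check; `AllocationConstraint` and `validateConstraints` are hypothetical names for illustration.

```typescript
// Hypothetical per-asset allocation bounds a user can set before optimizing.
export interface AllocationConstraint {
  ticker: string;
  min: number; // minimum allocation (0..1)
  max: number; // maximum allocation (0..1)
}

// Checks that constraints are individually sane and jointly satisfiable
// (the minimums cannot add up to more than 100% of the portfolio).
export function validateConstraints(
  constraints: AllocationConstraint[],
): string[] {
  const errors: string[] = [];
  let minTotal = 0;
  for (const c of constraints) {
    if (c.min < 0 || c.max > 1 || c.min > c.max) {
      errors.push(`Invalid bounds for ${c.ticker}`);
    }
    minTotal += Math.max(c.min, 0);
  }
  if (minTotal > 1) errors.push("Minimum allocations exceed 100%");
  return errors;
}
```

Returning a list of human-readable errors (rather than throwing on the first one) lets the UI show all problems at once, in line with the user-friendly error guideline above.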
### Market Analysis

- Distinguish between factual market data and AI predictions
- Provide historical accuracy metrics for predictive features
- Update prediction models regularly with new market data

### Anomaly Detection

- Set appropriate thresholds to minimize false positives
- Provide context for detected anomalies
- Implement user feedback for anomaly reports
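To make the threshold trade-off concrete, here is a minimal z-score detector: raising the threshold reduces false positives at the cost of missing milder outliers. This is one plausible shape for the `detectAnomalies` function the Web Worker example calls; the actual implementation in the codebase may differ.

```typescript
// Minimal z-score anomaly detector: returns indices of points whose
// deviation from the mean exceeds `threshold` standard deviations.
export function detectAnomalies(series: number[], threshold = 3): number[] {
  const n = series.length;
  if (n < 2) return [];
  const mean = series.reduce((s, x) => s + x, 0) / n;
  const variance = series.reduce((s, x) => s + (x - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // constant series has no outliers
  return series
    .map((x, i) => ({ i, z: Math.abs((x - mean) / std) }))
    .filter((p) => p.z > threshold)
    .map((p) => p.i);
}
```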
## Development Workflow

1. **Testing AI Components**:

   - Create specific test cases for AI functionality
   - Test with diverse and edge-case data
   - Implement A/B testing for new AI features

   **Example: AI Component Test**

   ```typescript
   // __tests__/ai/risk-assessment.test.ts
   import { RiskAssessmentService } from "@/services/ai/risk-assessment";

   describe("Risk Assessment Service", () => {
     const mockPortfolio = {
       id: "port-123",
       assets: [
         { type: "equity", ticker: "AAPL", allocation: 0.2 },
         { type: "fixed_income", ticker: "BND", allocation: 0.5 },
         { type: "crypto", ticker: "BTC", allocation: 0.3 },
       ],
     };

     it("should provide risk scores for all asset classes", async () => {
       const service = new RiskAssessmentService();
       const result = await service.process(mockPortfolio);

       expect(result.data.overallRiskScore).toBeDefined();
       expect(result.data.assetClassRisks).toHaveProperty("equity");
       expect(result.data.assetClassRisks).toHaveProperty("fixed_income");
       expect(result.data.assetClassRisks).toHaveProperty("crypto");
       expect(result.confidence).toBeGreaterThan(0.7);
     });

     it("should identify high-risk assets correctly", async () => {
       const service = new RiskAssessmentService();
       const result = await service.process(mockPortfolio);

       expect(result.data.highRiskAssets).toContain("BTC");
     });
   });
   ```
2. **Versioning**:

   - Track AI model versions in application code
   - Document changes between model versions
   - Implement feature flags for new AI capabilities
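A minimal sketch of version tracking plus feature flags, under the assumption that model versions live in application code as the guideline suggests; the registry names, versions, and flags here are invented for illustration.

```typescript
// Hypothetical registry: every AIResult carries the modelVersion it came
// from, and new capabilities sit behind feature flags until validated.
const MODEL_VERSIONS: Record<string, string> = {
  riskAssessment: "2.3.1",
  anomalyDetection: "1.0.0-beta", // new capability, still behind a flag
};

const FEATURE_FLAGS: Record<string, boolean> = {
  anomalyDetection: false,
};

export function isModelEnabled(model: string): boolean {
  // Models without an explicit flag are enabled by default.
  return FEATURE_FLAGS[model] ?? true;
}

export function modelVersion(model: string): string {
  const version = MODEL_VERSIONS[model];
  if (!version) throw new Error(`Unknown model: ${model}`);
  return version;
}
```

In production these values would more likely come from configuration or a flag service than from constants, so flags can change without a deploy.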
3. **Monitoring**:

   - Log AI performance metrics
   - Track user engagement with AI features
   - Monitor for drift in AI model accuracy
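Drift monitoring can be as simple as comparing rolling accuracy against a baseline. The `DriftMonitor` below is an illustrative sketch, not an existing class in this repository; real monitoring would feed these numbers into the metrics pipeline.

```typescript
// Hypothetical drift monitor: compares recent accuracy over a sliding
// window against a baseline and flags degradation beyond a tolerance.
export class DriftMonitor {
  private outcomes: boolean[] = [];

  constructor(
    private baselineAccuracy: number,
    private tolerance = 0.05,
    private windowSize = 100,
  ) {}

  recordPrediction(correct: boolean): void {
    this.outcomes.push(correct);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
  }

  currentAccuracy(): number {
    // Before any data arrives, assume the baseline holds.
    if (this.outcomes.length === 0) return this.baselineAccuracy;
    return this.outcomes.filter(Boolean).length / this.outcomes.length;
  }

  hasDrifted(): boolean {
    return this.currentAccuracy() < this.baselineAccuracy - this.tolerance;
  }
}
```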
4. **CI/CD Integration**:

   - Automate AI model testing with each code change
   - Implement quality gates for AI performance metrics
   - Include AI-specific resources in infrastructure as code

   **Example: GitHub Actions Workflow for AI Testing**

   ```yaml
   # .github/workflows/ai-model-tests.yml
   name: AI Model Testing

   on:
     push:
       branches: [main]
       paths:
         - "src/services/ai/**"
         - "models/**"
     pull_request:
       branches: [main]

   jobs:
     test:
       runs-on: ubuntu-latest

       steps:
         - uses: actions/checkout@v3

         - name: Set up Node.js
           uses: actions/setup-node@v3
           with:
             node-version: "20"
             cache: "npm"

         - name: Install dependencies
           run: npm ci

         - name: Run AI unit tests
           run: npm run test:ai

         - name: Run AI model performance benchmark
           run: npm run benchmark:ai

         - name: Verify model accuracy is above threshold
           run: node scripts/verify-model-accuracy.js --threshold=0.85

         - name: Report AI test metrics
           if: always()
           run: node scripts/report-ai-metrics.js
           env:
             METRICS_API_KEY: ${{ secrets.METRICS_API_KEY }}
   ```
## Ethical Considerations

1. **Fairness**:

   - Regularly audit AI systems for bias
   - Test with diverse data representing all user groups
   - Implement fairness metrics in AI evaluation

2. **Accountability**:

   - Clearly define responsibility for AI-driven decisions
   - Maintain audit trails for critical AI operations
   - Establish review processes for AI systems

3. **Sustainability**:
   - Optimize AI models for computational efficiency
   - Consider environmental impact of training and inference
   - Balance model complexity with resource usage

## Compliance

1. **Regulatory Adherence**:

   - Ensure AI systems comply with financial regulations
   - Maintain documentation for regulatory review
   - Implement controls for regulated AI use cases

   **Regulatory References:**

   - [SEC 17 CFR § 275.204-2(a)(16)](https://www.ecfr.gov/current/title-17/section-275.204-2) - Books and records to be maintained by investment advisers
   - [GDPR Article 22](https://gdpr-info.eu/art-22-gdpr/) - Automated individual decision-making, including profiling
   - [FINRA Regulatory Notice 21-25](https://www.finra.org/rules-guidance/notices/21-25) - Artificial Intelligence in the Securities Industry
   - [Federal Reserve SR Letter 11-7](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm) - Guidance on Model Risk Management

2. **Data Governance**:

   - Establish clear data retention policies for AI training
   - Document data lineage for AI models
   - Implement data minimization practices

   **Regulatory References:**

   - [GDPR Article 5](https://gdpr-info.eu/art-5-gdpr/) - Principles relating to processing of personal data
   - [CCPA §1798.100](https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=CIV&sectionNum=1798.100) - Consumer right to know
   - [NY DFS 23 NYCRR 500](https://www.dfs.ny.gov/industry_guidance/cybersecurity) - Cybersecurity Requirements for Financial Services Companies

## Updates to These Guidelines

These guidelines will be reviewed and updated quarterly to reflect new AI capabilities, regulatory changes, and industry best practices. All team members are encouraged to contribute suggestions for improvements.

> **Note:** The "Last Updated" date at the bottom of this document is automatically maintained by a GitHub Actions workflow whenever changes are pushed to this file. Please do not modify this date manually.

Last Updated: May 11, 2025
### AI context configuration (filename not shown in this capture; new file, +6 lines)
```json
{
  "currentFile": { "maxTokens": 20000 },
  "diff": { "maxTokens": 10000 },
  "terminal": { "maxLines": 1000 },
  "folder": { "maxDepth": 4, "maxFiles": 30, "maxFileSize": 80000 }
}
```
## Review comment (automated verification agent)
**Verify regulatory references are current and applicable.** The cited regulations should be checked to ensure they are current and applicable to the financial services provided by VeritasVault.net.
🌐 Web query:
💡 Result:
Based on the search results, the regulatory landscape for AI in financial services in 2025 appears more complex than the specific regulations mentioned in your query. While some of these regulations remain relevant, there have been significant developments in how regulators approach AI governance.
Current Regulatory Framework for AI in Financial Services (2025)
The regulatory approach to AI in financial services as of May 2025 is characterized by a combination of existing frameworks being applied to new technologies, rather than a single comprehensive AI regulation. Despite the focus on AI by U.S. financial regulators, the SEC, CFTC, and FINRA have not yet issued new regulations specifically addressing AI use[5].
FINRA's Current Approach
FINRA has emphasized a technology-neutral regulatory approach, meaning that existing rules apply to AI just as they would to any other technology. In January 2025, FINRA published its Annual Regulatory Oversight Report which highlighted several AI-related regulatory risks[1][5]. This report addressed:
FINRA's guidance advises firms to identify risks related to AI accuracy or bias, mitigate cybersecurity risks, and implement robust cybersecurity programs to combat AI-driven threats[5].
SEC Recordkeeping Requirements
Regarding SEC Rule 17 CFR § 275.204-2(a)(16), the search results indicate that various SEC recordkeeping provisions may be implicated by AI use. In 2023, the SEC proposed amendments to Exchange Act Rules 17a-3 and 17a-4 and Advisers Act Rule 204-2 to require broker-dealers and investment advisers to maintain books and records related to the requirements of proposed conflict rules for AI use[8]. However, the search results do not specifically confirm whether these proposals have been finalized or implemented as of May 2025.
Federal Reserve Guidance
The search results mention Federal Reserve SR Letter 11-7 on Supervisory Guidance on Model Risk Management in the context of AI cybersecurity risk management[7]. This suggests that this guidance remains relevant in 2025, though it may be supplemented by newer frameworks.
Recent Regulatory Developments
Executive Order and Policy Shifts
A significant development was the Trump administration's January 23, 2025, executive order on AI, which directs the development of an action plan to "sustain and enhance America's global AI dominance"[5]. This signals a potential shift in the regulatory approach compared to the previous administration.
FINRA Forward Initiative
On April 21, 2025, FINRA unveiled "FINRA Forward," a broad review of its rules and regulatory framework intended to modernize existing regulations[10]. This initiative includes:
Treasury Department Guidance
The Treasury Department has also been active in providing guidance on AI in financial services. A report on "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector" was issued in response to Executive Order 14110[7]. This report provides best practices for financial institutions to manage AI-specific cybersecurity risks.
In summary, while SEC 17 CFR § 275.204-2(a)(16), FINRA Regulatory Notice 21-25, and Federal Reserve SR Letter 11-7 remain part of the regulatory framework, they do not represent the complete or most current standards for AI in financial services as of May 2025. The regulatory landscape continues to evolve, with FINRA's technology-neutral approach, ongoing SEC proposals, Treasury Department guidance, and the new executive order all shaping how financial institutions should approach AI governance and compliance.
**Update AI Regulatory References for 2025**

The four citations remain relevant but no longer reflect the full AI governance landscape. Please expand this list in `.aiguidance/ai-guidelines.md` (lines 363–367) to include recent U.S. financial-services AI developments, and adjust the snippet accordingly so that VeritasVault.net's AI guidelines align with the current regulatory framework.