What fund administrators and asset managers need to know about AI washing

By Deepak Sheroan, Co-Founder and CTO

The financial services industry is experiencing an AI revolution. According to recent research, 52% of financial services professionals now claim to use generative AI, up from 40% in 2023. Even more striking, 38% report using AI for trading and portfolio optimization, compared to just 15% two years ago.
But here's the uncomfortable truth: many of these claims are exaggerated or outright false.
This phenomenon, called AI washing, poses a serious threat to fund administrators, private markets investors, and asset managers who rely on accurate information to make critical operational and investment decisions. Understanding how to identify AI washing isn't just about avoiding bad vendors; it's about protecting your firm's reputation, operational integrity, and fiduciary responsibilities.
What is AI Washing and Why Should You Care?
AI washing occurs when companies falsely or inaccurately claim to leverage artificial intelligence technologies in their business operations. According to the CFA Institute, AI washing involves "using buzzwords and marketing strategies that exaggerate the true capabilities or presence of AI in companies' business activities, leading to client and stakeholder confusion, skepticism, and potential ethical concerns."
For fund administrators and asset managers, AI washing is particularly dangerous because:
1. It Undermines Transparency and Trust
AI washing directly contradicts the principles of Explainable AI (XAI), the movement toward making AI systems transparent and understandable. When vendors exaggerate their AI capabilities, they make it harder for you to understand what's actually happening with your data and processes. This opacity creates risk in an industry where transparency is paramount.
2. It Violates Fiduciary Duties
As a fund administrator or asset manager, you have ethical obligations to your clients as outlined in industry standards like the CFA Institute Code of Ethics and Standards of Professional Conduct. Relying on AI-washed products means you cannot provide clients with accurate information about how their data is being processed or how decisions are being made.
3. It Prevents You from Identifying Real Value
AI washing makes it nearly impossible to determine whether a vendor's solution "delivers something that is actually novel and potentially adds value" or is simply "commoditized, bereft of new or useful features." You could be paying premium prices for basic automation dressed up as AI.
4. It Attracts Regulatory Scrutiny
Regulators are increasingly focused on AI washing. The CFA Institute notes that AI washing "has increasingly become the subject of heightened scrutiny from the investment community, including regulators." Partnering with vendors who engage in AI washing could expose your firm to regulatory risk.
5. It Wastes Resources and Creates Operational Risk
Implementing AI-washed solutions means investing time, money, and resources into technology that won't deliver promised results. For fund administrators managing complex workflows — capital calls, NAV calculations, investor reporting — unreliable AI creates operational risk that can cascade through your entire process.
Why Companies Engage in AI Washing: The Uncomfortable Economics
Understanding why AI washing happens helps you identify it. The CFA Institute report identifies several key motivations:
Commercial Pressure to Appear Cutting-Edge
Financial services firms face intense pressure to demonstrate they're adopting the latest technologies. Showing clients that you're using "AI-powered" solutions can be a competitive differentiator, even if the AI is superficial or non-existent.
The High Cost of Genuine AI Implementation
Real AI development requires "considerable resources in terms of technology spending to acquire the software and hardware needed to implement many sophisticated types of AI algorithms." Beyond technology, firms need specialized personnel — data scientists, ML engineers, AI researchers — who are expensive and time-consuming to hire.
Fear of Falling Behind Competitors
The report notes that "appearing inferior in terms of developing what are considered cutting-edge, novel, and potentially 'game-changing' technologies and methodologies is often perceived as a cardinal sin." This fear drives firms to exaggerate their AI capabilities rather than admit they're still using traditional methods.
The "Jenga Effect": Risk of Disrupting Working Processes
Many established firms already have successful operational processes. Implementing genuine AI means potentially disrupting what works. As the report explains, firms fear that "in attempting to advance their investment process with AI, they may end up hurting their investment process more than helping it by removing critical (but nevertheless seemingly outdated) components."
This creates a perverse incentive: claim to use AI for marketing purposes while keeping traditional processes intact.
Red Flags: How to Identify AI Washing
The CFA Institute provides a framework for detecting AI washing. Here are the most critical warning signs for fund administrators and asset managers:
Red Flag #1: Check the Personnel (The Easiest Test)
Before diving into technical questions, investigate the team supposedly building the AI. According to the report:
"If a firm's head of data science or AI is simply an individual who has worked at that firm for a long time but has scant experience and education in AI, that is a good indication that the asset manager's claims of applying AI in any material way are probably exaggerated."
What to look for:
- Does the AI/data science leader have relevant education (degrees in ML, computer science, data science)?
- Do they have experience building and deploying AI systems at scale?
- Can they discuss technical details of their implementation?
Red flag: The "AI expert" is a long-time employee who was simply rebranded from another role without relevant AI credentials.
Red Flag #2: Inability to Explain Data Sources and Preprocessing
Real AI requires substantial, high-quality data. The report emphasizes that AI tools face "often-daunting data requirements, necessary for properly training algorithms."
Questions to ask:
- What data sources train your models?
- How do you handle missing data, outliers, and data quality issues?
- What preprocessing and feature engineering techniques do you use?
- How do you standardize or normalize input features?
Red flag: Vague answers like "we use market data" or "our AI figures it out automatically" without technical specifics.
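To calibrate what a non-evasive answer sounds like, here is a minimal, hypothetical sketch of the kind of preprocessing a credible vendor should be able to describe: imputing missing values, winsorizing outliers, and standardizing features. The function and thresholds are illustrative, not any particular vendor's pipeline.

```python
import numpy as np
import pandas as pd

def preprocess_features(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative preprocessing: impute, clip outliers, standardize."""
    out = df.copy()
    for col in out.select_dtypes(include=np.number).columns:
        # Impute missing values with the column median
        out[col] = out[col].fillna(out[col].median())
        # Winsorize extreme outliers at the 1st/99th percentiles
        lo, hi = out[col].quantile([0.01, 0.99])
        out[col] = out[col].clip(lo, hi)
        # Z-score standardization so features share a common scale
        std = out[col].std()
        if std > 0:
            out[col] = (out[col] - out[col].mean()) / std
    return out
```

A vendor who actually does this work can walk you through each step and explain why it was chosen; a vendor who doesn't will stall at the first question.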
Red Flag #3: No Validation or Testing Methodology
Genuine AI requires rigorous validation to prevent overfitting and ensure models work on new data.
Questions to ask:
- Can you provide out-of-sample test results?
- What precautions do you take against overfitting?
- What mechanisms are in place to retrain models?
Red flag: The vendor cannot provide specific validation metrics, or deflects with claims that their models are "too complex to test traditionally."
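To see why the out-of-sample question matters, consider a toy illustration (synthetic data and a generic scikit-learn model, not any vendor's system). A model can look near-perfect on the data it was trained on and still be mediocre on data it has never seen; the held-out number is the one that counts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: 500 samples, 20 mostly-noise features, binary label
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Hold out 30% of the data that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# In-sample accuracy is typically near-perfect here; out-of-sample is not.
# The out-of-sample figure is what a vendor should be reporting.
print("in-sample accuracy:    ", model.score(X_train, y_train))
print("out-of-sample accuracy:", model.score(X_test, y_test))
```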
Red Flag #4: Lack of Model Interpretability
For operational AI in fund administration, you need to understand why the AI made specific decisions—especially when those decisions affect investor capital or regulatory reporting.
Questions to ask:
- How do you maximize model interpretability?
- Can you explain why your AI made a specific extraction or classification?
- Can you provide concrete examples of how you communicate AI decisions to users?
Red flag: "Black box" explanations or claims that "the AI just works" without ability to show reasoning.
Red Flag #5: No Governance or Audit Process
Responsible AI requires governance structures to ensure quality, compliance, and accountability.
Questions to ask:
- What governance structures ensure responsible AI use?
- Do you have an internal AI audit process?
- How often are models reviewed for compliance?
- What happens when the AI makes an error?
Red flag: No formal governance framework or audit process for AI systems.
Red Flag #6: The "Secret Sauce" Defense
While firms legitimately protect proprietary information, the report warns that firms often "use the 'secret sauce' defense to shield them from revealing too much detail about what tools they are using, including AI tools."
What's reasonable: Protecting specific algorithm implementations or training data sources.
Red flag: Refusing to discuss validation methodology, accuracy metrics, or general approach because it's "proprietary."
Red Flag #7: Outsourcing Without Quality Assurance
Many "AI-powered" platforms are simply wrappers around third-party APIs (like GPT-4 or Claude). This isn't inherently bad, but requires validation.
Questions to ask:
- If you use third-party AI services, which components are proprietary vs. outsourced?
- What processes ensure the quality of outsourced AI?
- How do you validate third-party model performance?
- What happens if the third-party API fails or changes?
Red flag: Evasiveness about architecture or inability to explain how they validate third-party components.
How DwellFi Avoids AI Washing: Our Commitment to Transparency
At DwellFi, we build operational AI for financial services—specifically document processing, data extraction, and workflow automation for fund administrators and private markets professionals. We're not forecasting asset returns or building trading algorithms. We're solving the operational challenges that slow down your team: extracting data from subscription documents, processing capital call notices, analyzing fund documents, and automating repetitive workflows.
Because we actually build this technology, we hold ourselves to the same standards we advocate for the industry. Here's how we avoid AI washing:
1. Transparent Team Credentials
Our AI team includes specialists with backgrounds in natural language processing (NLP), computer vision, and production machine learning systems. We didn't rebrand existing engineers as "AI experts"—we hired people who've built and deployed AI at scale. We're transparent about our team's qualifications because we have nothing to hide.
2. Clear Data Requirements and Preprocessing
We're upfront about what our AI can and cannot handle:
- What we handle well: Standard PDFs, structured documents, clear scans
- What requires advanced processing: Handwritten notes, severely degraded scans (which we flag for human review or advanced processing at higher cost)
- Our approach: Multi-model consensus, adaptive OCR, table detection, and transparent handling of edge cases
We don't claim our AI "magically" handles everything. We show you exactly what preprocessing happens and where human review is needed.
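As an illustration of the multi-model consensus idea mentioned above, here is a simplified sketch (not our production pipeline; the extractor callables are stand-ins for real models): query several extractors for the same field, accept the majority value, and flag anything short of full agreement for human review.

```python
from collections import Counter

def consensus_extract(extractors, document, field):
    """Simplified multi-model consensus for a single document field."""
    values = [extract(document, field) for extract in extractors]
    (top_value, votes), = Counter(values).most_common(1)
    agreement = votes / len(values)
    return {
        "field": field,
        "value": top_value,
        "agreement": agreement,
        # Anything short of unanimity gets routed to a human reviewer
        "needs_review": agreement < 1.0,
    }
```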
3. Rigorous Validation and Accuracy Metrics
We maintain held-out test sets for each document type and report accuracy metrics (a simplified version of the field-level calculation is sketched after this list):
- Standard documents: 95%+ extraction accuracy
- Complex/degraded documents: 85%+ accuracy
- Testing frequency: Monthly validation on new document formats
- Trial approach: We test on YOUR actual documents during evaluation—not curated examples
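At its core, the field-level accuracy calculation is simple; what makes it meaningful is that the labels come from human reviewers and are never used to train or tune the models being measured. A minimal sketch, with hypothetical field names and values:

```python
def extraction_accuracy(predictions: dict, ground_truth: dict) -> float:
    """Share of fields where the extracted value matches the human label."""
    fields = ground_truth.keys()
    correct = sum(predictions.get(f) == ground_truth[f] for f in fields)
    return correct / len(fields)

# Hypothetical held-out example
truth = {"fund_name": "Fund I, LP", "commitment": "5,000,000", "close_date": "2024-03-31"}
pred  = {"fund_name": "Fund I, LP", "commitment": "5,000,000", "close_date": "2024-03-13"}
print(extraction_accuracy(pred, truth))  # 2 of 3 fields correct -> 0.666...
```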
4. Built-In Explainability
Every extraction in DwellFi includes:
- Source highlighting: We show exactly where each data point came from
- Confidence scores: You know which extractions to verify
- Exception flagging: When our models are uncertain, we flag fields and explain why (e.g., "Multiple values found" or "Low OCR confidence")
Transparency isn't a feature—it's how we built the system.
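Concretely, every extraction carries its own evidence. Here is a minimal sketch of what such an explainable result can look like; the field names, coordinates, and values are illustrative, not DwellFi's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Extraction:
    """Illustrative shape of an explainable extraction result."""
    field_name: str
    value: str
    confidence: float               # model confidence in [0, 1]
    page: int                       # source page, for highlighting
    bbox: tuple                     # (x0, y0, x1, y1) of the source text
    flags: list = field(default_factory=list)

result = Extraction(
    field_name="capital_commitment",
    value="5,000,000",
    confidence=0.62,
    page=4,
    bbox=(102.0, 518.5, 241.3, 534.0),
    flags=["Multiple values found"],  # uncertain, and we say why
)
```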
5. Formal Governance and Quality Controls
We've established the following controls (a toy confidence-routing sketch follows the list):
- Multi-layer quality controls: Automated validation checks, confidence-based routing to human review
- Audit logs: Complete tracking of all extractions and corrections
- Clear SLAs: Defined accuracy and response time commitments with accountability
- User feedback loops: Corrections improve our models (with appropriate privacy controls)
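Confidence-based routing is the heart of the quality-control layer. A toy sketch of the idea (the threshold and log structure are illustrative; in production the audit log is a durable store, not an in-memory list):

```python
REVIEW_THRESHOLD = 0.90  # illustrative cutoff, tuned per document type

audit_log = []  # stand-in for a durable, append-only audit store

def route(extraction: dict) -> str:
    """Route an extraction by confidence and record the decision."""
    destination = (
        "auto_accept" if extraction["confidence"] >= REVIEW_THRESHOLD
        else "human_review"
    )
    # Every extraction and routing decision is recorded for auditability
    audit_log.append({**extraction, "routed_to": destination})
    return destination

print(route({"field": "nav_per_share", "value": "101.37", "confidence": 0.74}))
# -> "human_review"
```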
6. Honest About Hybrid Architecture
We use a hybrid approach:
- Proprietary models: For document-specific tasks (subscription agreement parsing, fund document analysis)
- Third-party LLMs: For general language understanding
- Validation: We test all third-party components on our datasets before integration
- Fallback systems: We don't rely on any single vendor
We're transparent about our architecture because we understand every component and can validate its performance.
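The fallback idea is straightforward to sketch (the provider list and error handling here are illustrative, not our actual integration code): try each backend in order and fail over when one errors out, so no single vendor becomes a hard dependency.

```python
def extract_with_fallback(document, providers):
    """Try each (name, callable) provider in order; fail over on errors."""
    errors = []
    for name, call in providers:
        try:
            return name, call(document)
        except Exception as exc:  # outage, API change, rate limit, ...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# providers might be: a primary hosted LLM, a secondary vendor,
# and an in-house model as the last resort.
```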
7. Continuous Monitoring and Improvement
We monitor extraction accuracy in production and flag degradation automatically. When we encounter new document formats, we have a rapid response process: human review, model fine-tuning, and validation before deployment. Our models improve over time because we've built the infrastructure to learn from production use.
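The degradation check itself can be sketched in a few lines (baseline, window size, and tolerance are illustrative): keep a rolling window of reviewer-verified outcomes and alert when accuracy drifts below the validated baseline.

```python
from collections import deque

BASELINE_ACCURACY = 0.95   # illustrative target from release validation
WINDOW = 200               # recent verified extractions to average over
TOLERANCE = 0.03           # allowed drift before alerting

recent = deque(maxlen=WINDOW)  # 1 = reviewer confirmed, 0 = corrected

def record_outcome(correct: bool) -> None:
    """Track reviewer-verified outcomes and flag accuracy degradation."""
    recent.append(1 if correct else 0)
    if len(recent) == WINDOW:
        rolling = sum(recent) / WINDOW
        if rolling < BASELINE_ACCURACY - TOLERANCE:
            alert(f"extraction accuracy degraded to {rolling:.1%}")

def alert(message: str) -> None:
    # Stand-in: production would page an on-call engineer
    print("ALERT:", message)
```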
Frequently Asked Questions
Q: How can I tell if a vendor is using real AI or just basic automation?
Ask technical questions about their data sources, preprocessing techniques, validation methodology, and model interpretability. Real AI vendors can provide specific answers with metrics. AI-washed vendors will give vague responses or hide behind "proprietary" claims. The personnel test is also powerful: check if their AI leadership has relevant credentials and experience.
Q: Is it bad if a vendor uses third-party AI services like GPT-4?
Not necessarily. Many legitimate AI applications use third-party models as components. What matters is whether the vendor:
- Is transparent about what's proprietary vs. third-party
- Has validated the third-party components on their specific use case
- Has fallback systems if the third-party service fails
- Understands how the third-party model works and its limitations
The red flag is when a vendor is just a thin wrapper around someone else's API without adding genuine value or validation.
Q: Our current vendor claims to use AI but can't answer these technical questions. What should we do?
You have three options:
- Request a technical deep-dive: Ask for a meeting with their data science team (not just sales) to get detailed answers
- Demand proof: Request accuracy metrics on your actual documents during a trial period
- Evaluate alternatives: If they can't provide satisfactory answers, consider vendors who are transparent about their capabilities
Remember: if they're evasive about basic validation questions, that's a sign they may not have robust AI—or any AI at all.
Q: How much should genuine AI cost? Are cheap "AI-powered" tools legitimate?
Real AI development requires significant investment in data infrastructure, specialized talent, and ongoing maintenance. While pricing varies by use case, extremely cheap "AI-powered" tools (e.g., $50/month for complex document processing) are often just basic automation or thin wrappers around third-party APIs.
That said, cost alone isn't the indicator—some vendors achieve economies of scale. Focus on validation: can they prove their AI works on YOUR documents with measurable accuracy?
Q: What's the difference between AI for investment management vs. operational AI for fund administration?
The CFA Institute report focuses on investment management AI (forecasting returns, portfolio optimization, trading). Operational AI for fund administration focuses on document processing, data extraction, and workflow automation.
The questions to ask are different:
- Investment AI: "How does your model outperform benchmarks?" "What's your Sharpe ratio?"
- Operational AI: "What's your extraction accuracy?" "How do you handle edge cases?" "Can you explain why you extracted this value?"
Don't let vendors confuse the two. If you're evaluating operational AI, ask operational questions.
Q: How can fund administrators protect themselves from AI washing when selecting vendors?
Follow this framework:
- Check personnel credentials first (easiest test)
- Ask for accuracy metrics on YOUR documents (not curated examples)
- Request technical explanations of data sources, preprocessing, and validation
- Demand interpretability (can they explain their AI's decisions?)
- Review governance structures (audit processes, quality controls, error handling)
- Test during trial period with real documents and edge cases
- Get references from similar clients who've used the AI in production
Q: What should I do if I suspect our firm is already using an AI-washed product?
Conduct an internal audit:
- Review vendor claims vs. actual performance
- Ask the technical questions from this article
- Test accuracy on a sample of your documents
- Document gaps between promised and actual capabilities
- Evaluate alternatives if significant gaps exist
If you discover AI washing, you may have grounds to renegotiate contracts or switch vendors—especially if the vendor made specific performance claims they cannot substantiate.
Q: How often should we re-evaluate our AI vendors?
At minimum, annually. But also re-evaluate when:
- Your document types or workflows change significantly
- The vendor releases major updates
- You notice accuracy degradation
- Regulatory requirements change
- New vendors enter the market with better capabilities
AI technology evolves rapidly. What was cutting-edge two years ago may now be commoditized.
Demand Truth in AI
The financial services industry stands at a crossroads. AI has genuine potential to transform operational efficiency, reduce errors, and free your team from repetitive tasks. But AI washing threatens to undermine this progress by creating skepticism, wasting resources, and exposing firms to operational and regulatory risk.
As fund administrators and asset managers, you have the power to demand transparency. By asking tough questions and holding vendors accountable, you can:
- Protect your firm from operational risk and wasted investment
- Fulfill your fiduciary duties to clients and investors
- Support genuine innovation by rewarding vendors who build real AI
- Raise industry standards for transparency and accountability
At DwellFi, we believe transparency isn't optional — it's foundational. We're committed to showing our work, proving our accuracy, and being honest about what our AI can and cannot do.
Ready to see the difference between AI washing and genuine operational AI?
We invite you to test DwellFi with your actual documents: your messiest subscription agreements, your most complex fund documents, your oldest scanned files. See our extraction accuracy firsthand, review our confidence scores, and verify our output against source documents.
Because when you're actually building operational AI, transparency isn't a risk; it's how you prove the technology works.