Tabular Understanding

Financial data is often presented in tabular formats, requiring the model to accurately interpret and analyze tables. This capability is essential for providing meaningful insights and actionable advice based on numerical and structured data. Our approach includes:

  • Specialized Pre-Training: Training the model on a diverse range of financial tables and datasets to enhance its ability to understand and manipulate tabular data.

  • Table Parsing Algorithms: Implementing sophisticated algorithms that enable the model to parse, extract, and interpret information from tables accurately.

  • Integration with Analytical Tools: Combining the model’s outputs with advanced analytical tools to provide comprehensive insights and facilitate data-driven decision-making.
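
To make the table-parsing step concrete, below is a minimal, hypothetical sketch in Python: a financial table is serialized into a Markdown layout so its row and column structure survives inside the model's prompt. The FinancialTable class, column names, and figures are illustrative assumptions, not Dolphin.fm's actual schema or pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FinancialTable:
    """Toy container for a structured financial table (illustrative only)."""
    name: str
    columns: List[str]
    rows: List[List[str]]

def table_to_markdown(table: FinancialTable) -> str:
    """Serialize the table as Markdown so rows and columns stay unambiguous in the prompt."""
    header = "| " + " | ".join(table.columns) + " |"
    divider = "| " + " | ".join("---" for _ in table.columns) + " |"
    body = "\n".join("| " + " | ".join(row) + " |" for row in table.rows)
    return f"{table.name}\n{header}\n{divider}\n{body}"

# Hypothetical pool-level yield table that a user question might reference.
pools = FinancialTable(
    name="Stablecoin Pools (7d)",
    columns=["Pool", "TVL (USD)", "APY (%)", "7d Volume (USD)"],
    rows=[
        ["USDC/USDT", "182,400,000", "4.2", "96,100,000"],
        ["DAI/USDC", "54,700,000", "3.1", "21,800,000"],
    ],
)

# The serialized table is embedded directly in the question sent to the LLM.
prompt = (
    "Using only the table below, which pool has the higher ratio of "
    "7d volume to TVL?\n\n" + table_to_markdown(pools)
)
print(prompt)
```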

Beyond tabular understanding, several other aspects of our LLM are important:

  • Engineered GPT-4o base: Our LLM builds on an engineered GPT-4o foundation, enabling creators to build personalized, GPT-powered avatars.

  • Specialty fine-tuning: Our in-house team of former quants provides specialized toolsets and fine-tuned models, empowering creators to build reliable DeFi expert avatars.

  • Architecture optimization: To further improve content quality, we employ techniques such as Corrective Retrieval-Augmented Generation (cRAG), which grounds each avatar in its creator's own content, increasing both relevance and factuality.
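
As a rough illustration of the cRAG idea, the sketch below retrieves passages from a creator's content, has the model grade their relevance, and falls back to an external search when nothing retrieved is trustworthy before generating a grounded answer. The retrieve, grade, web_search, and generate callables are placeholders for whatever retriever, grader, and LLM client are actually used; this is not Dolphin.fm's implementation.

```python
from typing import Callable, List

def corrective_rag(
    question: str,
    retrieve: Callable[[str], List[str]],     # fetch candidate passages from the creator's content
    grade: Callable[[str, str], str],         # LLM judge: "correct", "ambiguous", or "incorrect"
    web_search: Callable[[str], List[str]],   # external fallback evidence source
    generate: Callable[[str, str], str],      # final generation grounded in the vetted context
) -> str:
    docs = retrieve(question)
    graded = [(doc, grade(question, doc)) for doc in docs]

    # Keep only passages the grader judges relevant and factual.
    relevant = [doc for doc, verdict in graded if verdict == "correct"]

    if not relevant:
        # Corrective step: if retrieval produced nothing trustworthy,
        # replace the context with externally searched evidence
        # instead of answering ungrounded.
        relevant = web_search(question)

    context = "\n\n".join(relevant)
    return generate(question, context)
```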

Reference:

  • Corrective Retrieval Augmented Generation (cRAG)

  • RAFT: Adapting Language Model to Domain Specific RAG (used to adapt our language models for stronger responses on Web3 domain knowledge)