FinFetcher logo

A Local Market Data Server for Algotraders

From zero to a working data pipeline in 30 seconds: FinFetcher takes care of fetching, storing, indexing, updating, and exposing the data.

Download now

FinFetcher is currently free to use during the beta!

The UI of FinFetcher

Pricing

$9.99/month

Currently Free

FinFetcher has just launched and is free during the beta.

During the beta, expect frequent updates and new features. Starting free lets you explore FinFetcher with zero commitment.

Why is a tool like FinFetcher useful?

Hitting remote APIs directly is fine for simple uses, but algotraders need to iterate fast over large slices of data and run niche queries. Going to a provider over the network is slow, fragile, and rate-limited.

So most algotraders build a local cache, which means writing boilerplate for fetching, storing, indexing, and keeping data updated – this steals time from building the actual trading logic.

FinFetcher generalizes that cache and solves the problem once and for all. Add your API keys, pick your fetchers, and it runs locally: it fetches, stores, maintains indexes, keeps the data fresh, and serves it on localhost (as JSON/CSV, or directly via SQLite). FinFetcher is not a data provider: it runs on your machine using your own API keys.

Bragging list: it works locally or on a server, supports multiple API providers, gives you an optional GUI, generates exact chatbot documentation based on your config, abstracts away the hard parts while keeping you in control, runs as a fast multithreaded binary, and lets you get the data from any programming language, optionally with zero overhead.

Add fetchers with just a few clicks.

The dead easy FinFetcher model

Just start the binary and fetch data from port 3055 (or the port you configure).

A diagram of how FinFetcher works
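To make that concrete, here is a minimal Python sketch of fetching data from a running FinFetcher over HTTP. The /query endpoint is the one mentioned in the tutorial further down; the query parameters (fetcher, ticker, format) and the response shape are placeholders for illustration, not the exact API, so use the generated chatbot documentation to get the precise call for your configuration.

```python
# Minimal sketch: query a locally running FinFetcher over HTTP.
# The parameter names below (fetcher, ticker, format) are assumptions made
# for illustration; the generated documentation gives the exact ones.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:3055/query"  # or the port you configured

params = {
    "fetcher": "stock_bars",  # hypothetical fetcher name
    "ticker": "AAPL",
    "format": "json",
}

url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
with urllib.request.urlopen(url, timeout=5) as resp:
    data = json.loads(resp.read())

print(f"received {len(data)} records")
```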

Problems FinFetcher handles so you don't have to

1
Have I handled all needed HTTP status codes to recover from errors?
429 backoff · 5xx retries · throttles
2
How should the data be stored so that it can be retrieved and queried fast?
database selection · access bridge · optimizations
3
How should I set up a runtime so that all types of data are always up to date, when what counts as up to date depends on each fetcher's settings, like bar interval?
schedulers · staleness · concurrency
4
Some data depends on other data (ticker bars use the ticker list, crypto bars use the crypto pairs, etc.). How can I ensure dependencies are resolved before their consumers run?
graph · dependencies
5
How do I build a state machine to start, pause, cancel, and update jobs individually, and decide whether the change affects their dependencies?
job control · state management · cascading updates
6
What indexing strategy enables fast queries while preserving good insert performance?
indexes · composite index
7
When running remotely, how do I build functionality to monitor what's happening with the data?
metrics · logs · alerts
8
An API key may be paid for some data but not others (e.g., stocks vs. options), which creates rate-limit differences. How do I avoid bottlenecks for the fast categories?
rate limiting · queue management · adaptive throttling
9
Different API providers have different endpoints and protocols. How do I handle them in a general way?
adapters · schemas · validation · generics
10
If I build a local cache, how should my algotrading program access it fast enough for a hot loop? (A minimal sketch follows this list.)
low-level optimization · CSV/JSON HTTP · multithreading
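For the hot-loop question in point 10, here is a minimal sketch of the zero-overhead path: reading FinFetcher's SQLite database directly instead of going over HTTP. The database file name, table name, and column names are assumptions for illustration; the real schema for your configuration is in the generated chatbot documentation.

```python
# Minimal sketch: read FinFetcher's SQLite database directly from a hot loop.
# The file name "finfetcher.db" and the table/column names are assumptions;
# check the generated documentation for the actual schema.
import sqlite3

# Open read-only so the running FinFetcher process keeps full control of writes.
conn = sqlite3.connect("file:finfetcher.db?mode=ro", uri=True)

query = """
    SELECT ts, open, high, low, close, volume
    FROM stock_bars              -- hypothetical table name
    WHERE ticker = ?
    ORDER BY ts DESC
    LIMIT 1000
"""

rows = conn.execute(query, ("AAPL",)).fetchall()
print(f"loaded {len(rows)} bars for AAPL")
conn.close()
```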

Supercharged with the feature of not needing to think

If you have any questions about how to use FinFetcher, even about your specific configuration, just click this button in the GUI.

The button above ↑ is a placeholder showing how it looks in the app.

Clicking it copies generated documentation that you give to a chatbot for help. With that documentation, the chatbot has all the information it needs to solve any issue you might have.

"Given this generated documentation, give me the exact code to fetch stock bars in Python." (or any other language)

"Given this generated documentation, do I have correct indexes to efficiently get X?"

"Given this generated documentation, how can I read the data with zero overhead by accessing SQLite directly?"

"Given this generated documentation, how does it work when I move my algotrading program to a remote server?"

The generated docs are optimized for ChatGPT, Claude, and Gemini.

Built to be easy to use remotely

Just expose the port the GUI runs on and you can access the entire dashboard, with no cumbersome terminal handling.

A diagram of how FinFetcher works

Video showing how logs can be inspected in the GUI. It also shows how you can set the log level all the way to "trace" to see every action.

Step-by-step tutorial on how to get started

System requirements: a somewhat modern version of Windows, macOS, or Linux

1. Download the FinFetcher binary and the base config finfetcher-config.json (bottom of this page) and place them in the same folder.

2. Run the binary (double-click or from Terminal/CMD).

3. Open the GUI at http://localhost:3055 (or the port printed in the console).

4. In the GUI, go to Set API Keys and enter your provider key(s) and the FinFetcher key (for now, use "beta").

5. Add your Fetchers, set their options, then use your data via http://localhost:3055/query or SQLite directly. For quick help, click Get generated documentation for chatbots and paste it into your chatbot.
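As a quick sanity check after step 5, you can pull a small CSV slice straight into a dataframe. The query parameters below are placeholders that depend on which fetchers you added; paste the generated chatbot documentation into a chatbot to get the exact URL for your setup.

```python
# Minimal sketch: load a CSV slice from FinFetcher into a pandas DataFrame.
# The query parameters are placeholders; the generated documentation gives
# the exact URL for the fetchers you actually configured.
import pandas as pd

url = "http://localhost:3055/query?fetcher=stock_bars&ticker=AAPL&format=csv"
df = pd.read_csv(url)  # pandas can read directly from an HTTP URL

print(df.head())
print(f"{len(df)} rows loaded")
```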

The terminal output of starting FinFetcher.

General ambitions with FinFetcher

FinFetcher is written to be as performant as possible while still being a general local cache tool for as many algotraders as possible. You should expect to be able to store hundreds of millions of rows per table and read from them quickly. FinFetcher is written in Go, but parts of the program, like the JSON and CSV encoding used for serving, are written in Rust to squeeze out maximum performance when your algotrading program works against FinFetcher.

FinFetcher uses SQLite, which I judge to offer the best tradeoff between performance, ease of packaging, and versatility. The program is written in layers, so the database layer can be swapped without affecting the rest of the application. If the algotrading community would rather have a Postgres or InfluxDB version of FinFetcher, such a change is absolutely doable, but it would come with a different set of tradeoffs. SQLite is, despite its name, very powerful.

For the first beta release, FinFetcher only supports fetchers that use Polygon.io. FinFetcher is built from the ground up to be completely general in what it fetches and from where, so adding additional data sources is no problem. There are two reasons for starting with only Polygon.io: 1. With early feedback, changes are easier to make on a smaller set of fetchers. 2. Shipping fully tested fetchers for all providers requires buying subscriptions to those providers, and by gauging interest in FinFetcher early, I avoid making big investments before any validation.

I am a programming enthusiast and have spent many thousands of hours building different projects. A big inspiration for this project has been PocketBase, for its architecture and high-quality product development. FinFetcher will not be for every algotrader, but I hope a significant group recognizes how much annoying boilerplate it removes before the actual work on the algorithm begins. I think FinFetcher is a very good tool for anyone who wants historic and semi-live data (up to 1 second latency), whether for backtesting or general model engineering. The beauty of FinFetcher is that once it is set up, you can completely transform your data pipeline when jumping between projects: just remove the fetchers you no longer want and add the ones you now need. Going from a stock pipeline to an options pipeline takes 10 seconds.

Downloads (v0.0.1)

Only tested on macOS so far

FinFetcher is currently compiled for Windows, macOS (Apple Silicon and Intel), and Linux (amd64 and ARM)

SHA256 checksums

Download base config file

FinFetcher is developed by Hoverest AB (5594118548)

Contact person for any type of issue is Hugo Olsson (hugo.contact01@gmail.com)