From zero to a working data pipeline in 30 seconds – FinFetcher takes care of fetching, storing, indexing, updating, and exposing the data.
FinFetcher is currently free to use during the beta!
Pricing
FinFetcher has just launched and is free during the beta. During the beta, expect frequent updates and new features. Starting free lets you explore FinFetcher with zero commitment.
Hitting remote APIs directly is fine for simple use cases, but algotraders need to iterate fast over large slices of data and run niche queries, and going to a provider over the network is slow, fragile, and rate-limited. So most algotraders build a local cache, which means writing boilerplate for fetching, storing, indexing, and keeping data updated – boilerplate that steals time from building the actual trading logic.
FinFetcher generalizes that cache and solves the problem once and for all. Add your API keys, pick your fetchers, and it runs locally: it fetches, stores, maintains indexes, keeps data fresh, and serves it on localhost (JSON/CSV over HTTP, or direct SQLite access). FinFetcher is not a data provider; it runs on your machine using your own API keys.
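Once FinFetcher is running, reading from the cache in your strategy code is just a local HTTP call. A minimal sketch in Python, assuming the /query endpoint described in the quick-start below and hypothetical parameter names (fetcher, symbol, start, end) – the generated documentation for your config shows the real ones:

```python
# Minimal sketch: read cached data from a locally running FinFetcher.
# The /query endpoint is the one named in the quick-start below; the
# parameter names and the response shape are hypothetical placeholders –
# your generated documentation describes the real interface.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "fetcher": "stock_bars",   # hypothetical fetcher name
    "symbol": "AAPL",
    "start": "2024-01-01",
    "end": "2024-06-30",
})

with urllib.request.urlopen(f"http://localhost:3055/query?{params}") as resp:
    rows = json.loads(resp.read())

print(f"Fetched {len(rows)} rows from the local cache")
```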
Bragging list:
- Works locally or on a server.
- Supports multiple API providers.
- Optional GUI.
- Generates exact chatbot documentation based on your config.
- Abstracts away the hard parts while keeping you in control.
- Ships as a performant, multithreaded binary.
- Lets you read your data from any programming language – optionally with zero overhead.
Add fetchers with just a few clicks.
Just start the binary and fetch data from port 3055 (or your configured port).
If you have any questions about how to use FinFetcher, even about your specific configuration, just click this button in the GUI.
The button above ↑ is a placeholder showing how it looks in the app.
Clicking it copies generated documentation that you give to a chatbot for help. With that documentation, the chatbot has all the information it needs to solve any issue you might have.
"Given this generated documentation, give me the exact code to fetch stock bars in Python." (or any other language)
"Given this generated documentation, do I have correct indexes to efficiently get X?"
"Given this generated documentation, how can I read the data with zero overhead by accessing SQLite directly?"
"Given this generated documentation, how does it work when I move my algotrading program to a remote server?"
The generated docs are optimized for ChatGPT, Claude, and Gemini.
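To illustrate the third prompt above: the zero-overhead path skips HTTP and serialization entirely by opening the SQLite database that FinFetcher maintains. A minimal sketch, where the file name, table, and columns are hypothetical placeholders – the generated documentation lists the real ones for your configuration:

```python
# Minimal sketch: zero-overhead reads by opening FinFetcher's SQLite
# database directly, skipping HTTP and JSON/CSV encoding entirely.
# The file name and the table/column names are hypothetical placeholders –
# your generated documentation lists the real ones.
import sqlite3

# Open read-only so your algotrading code never interferes with FinFetcher's writes.
conn = sqlite3.connect("file:finfetcher.db?mode=ro", uri=True)

rows = conn.execute(
    "SELECT ts, open, high, low, close, volume "
    "FROM stock_bars WHERE symbol = ? ORDER BY ts",
    ("AAPL",),
).fetchall()

print(f"Loaded {len(rows)} bars straight from SQLite")
conn.close()
```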
Just expose the port where the GUI runs and you can access the entire dashboard – no cumbersome terminal handling needed.
Video showing how logs can be inspected in the GUI. It also shows how you can set the log level all the way to "trace" to see every action.
1. Download the FinFetcher binary and the base config finfetcher-config.json
(bottom of this page) and place them in the
same folder.
2. Run the binary (double-click or from Terminal/CMD).
3. Open the GUI at http://localhost:3055
(or the port printed in the console).
4. In the GUI, go to Set API Keys and enter your provider key(s) and the FinFetcher key (for now, use "beta").
5. Add your Fetchers, set their options, then use your data via http://localhost:3055/query
or SQLite directly. For quick help, click Get generated documentation for chatbots and paste it into
your chatbot.
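Before wiring FinFetcher into a strategy, a quick way to confirm the setup is to hit the port from code. A minimal sanity check in Python, assuming the default port 3055:

```python
# Minimal sanity check: confirm FinFetcher is serving on its port
# (3055 by default, or whatever port the console printed) before
# pointing your algotrading code at it.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:3055", timeout=5) as resp:
        print(f"FinFetcher is up (HTTP {resp.status})")
except OSError as exc:
    print(f"FinFetcher does not seem to be running: {exc}")
```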
The terminal output of starting FinFetcher.
FinFetcher is written to be as performant as possible while still being a general local cache tool for as many algotraders as possible.
You should expect to be able to store hundreds of millions of rows per table and read from them quickly. FinFetcher is written
in Go, but parts of the program, such as the JSON and CSV encoding for serving, are written in Rust to maximize performance
when your algotrading program works against FinFetcher. FinFetcher uses SQLite, which I judged to offer the best tradeoff in terms
of performance, ease of packaging, and versatility. The program is written in layers, so the database layer can easily be adjusted
without affecting the rest of the application. So if the algotrading community would rather have a Postgres or InfluxDB version
of FinFetcher, such a change would absolutely be doable, though it would come with a different set of tradeoffs. SQLite is, despite its name,
very powerful.
For the first beta release, FinFetcher only supports fetchers that use Polygon.io. FinFetcher is built from the ground up to be completely
general in what it fetches and from where, so adding further sources is not a problem. There are two reasons for starting with
Polygon.io only: 1. Getting feedback early makes possible changes easier to carry out on a small set of fetchers. 2. Shipping fully
tested fetchers for all providers requires buying subscriptions to those providers; by gauging interest in FinFetcher early, I avoid
making big investments before any validation.
I am a programming enthusiast and have spent many thousands of hours building different projects. A big inspiration for this project has
been PocketBase, for its architecture and high-quality product development. FinFetcher will not be for every algotrader, but I hope a significant
group will appreciate cutting out the annoying boilerplate that stands between them and actually working on the algorithm. I think FinFetcher is a very good
tool for anyone who wants historical and semi-live data (up to 1 second latency), whether for backtesting or general model engineering.
The beauty of FinFetcher is that once it is set up, you can completely transform your data pipeline when jumping between projects: just
remove the fetchers you no longer want and add the ones you now need. Going from a stock pipeline to an options pipeline takes 10 seconds.
Only tested on macOS so far.
FinFetcher is currently compiled for Windows, macOS (Apple Silicon and Intel), and Linux (amd64 and ARM).
Windows (64-bit)
SHA256 checksums
Download base config file
FinFetcher is developed by Hoverest AB (5594118548)
The contact person for any type of issue is Hugo Olsson (hugo.contact01@gmail.com)