We build a holistic view of any ERC20 token using our logic and data collection framework. Our process begins at the token's inception: we track issuance onward to obtain complete price information, supplemented by external API data.
In addition, we meticulously extract transfer events from Ethereum blocks to compute the token balance of each holder. This data is then used to derive the Concentration Index, power our labeling engine, and identify Elite Entity Cohorts.
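As an illustration of how balances can be derived from transfer events, here is a minimal sketch. The event tuples and address strings are hypothetical stand-ins; in practice the events would be decoded from ERC20 `Transfer` logs, with mints appearing as transfers from the zero address and burns as transfers to it.

```python
from collections import defaultdict

ZERO = "0x0000000000000000000000000000000000000000"

def apply_transfers(events):
    """Replay (from, to, value) Transfer events in order to derive holder balances."""
    balances = defaultdict(int)
    for frm, to, value in events:
        if frm != ZERO:           # skip debit on mints (tokens created from zero address)
            balances[frm] -= value
        if to != ZERO:            # skip credit on burns (tokens sent to zero address)
            balances[to] += value
    # Keep only addresses with a positive balance.
    return {a: b for a, b in balances.items() if b > 0}

events = [
    (ZERO, "0xalice", 1000),    # mint 1000 to alice
    ("0xalice", "0xbob", 400),  # alice sends 400 to bob
    ("0xbob", ZERO, 100),       # bob burns 100
]
print(apply_transfers(events))  # {'0xalice': 600, '0xbob': 300}
```

Replaying events in block order like this keeps the balance table incrementally updatable: each new block only applies its own transfers on top of the existing state.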
We place major emphasis on carrying out this data collection process with precision. To deliver reliable insights, we distill our data sets by excluding unqualified addresses such as contracts and centralized exchange (CEX) wallets, so that the indicators derived from them are as accurate as possible.
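The filtering step described above can be sketched as follows. The label table and addresses here are hypothetical examples; in the real pipeline, labels would come from the labeling engine rather than a hard-coded dictionary.

```python
# Hypothetical address labels for illustration only.
LABELS = {
    "0xexchange1": "CEX",
    "0xrouter":    "contract",
    "0xwhale":     "EOA",
    "0xretail":    "EOA",
}

# Address categories excluded from holder-based indicators.
EXCLUDED = {"CEX", "contract"}

def qualified_holders(balances):
    """Drop contract and CEX addresses, keeping only externally owned accounts."""
    return {addr: bal for addr, bal in balances.items()
            if LABELS.get(addr, "EOA") not in EXCLUDED}

balances = {"0xexchange1": 5000, "0xrouter": 1200, "0xwhale": 900, "0xretail": 10}
print(qualified_holders(balances))  # {'0xwhale': 900, '0xretail': 10}
```

Excluding exchange and contract balances matters because a single CEX hot wallet can pool thousands of users' tokens, which would otherwise distort any concentration measure.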
To keep our aggregated on-chain data quick to update and lightweight, our database design follows these principles:
1. Simple data models: simple models ensure the database can be easily extended in the future.
2. No repeated information: duplicated information makes the database inefficient and leads to more errors.
3. Uniform labeling: consistent language reduces confusion and shortens the learning curve required to understand our data.
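A minimal sketch of what these principles could look like in practice is shown below. The record types and label vocabulary are illustrative assumptions, not our actual schema: token metadata is stored once and referenced by address (no repetition), records are flat (simple models), and every table shares one label vocabulary (uniform labeling).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    address: str   # token metadata stored once; other tables reference it by address
    symbol: str
    decimals: int

@dataclass(frozen=True)
class Transfer:
    token: str     # references Token.address instead of copying symbol/decimals
    block: int
    frm: str
    to: str
    value: int

# Uniform labeling: one controlled vocabulary reused across all tables.
ADDRESS_LABELS = ("EOA", "contract", "CEX")

usdc = Token("0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48", "USDC", 6)
t = Transfer(usdc.address, 18_000_000, "0xalice", "0xbob", 2_500_000)
print(t.token == usdc.address)  # True
```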
In the near future, we plan to expand our database to other EVM chains such as BNB Chain and Avalanche.
Following that, we plan to expand to non-EVM public chains as well, such as Cardano, Aptos, and Sui.
To produce first-class market indicators like the Concentration Index and Elite Entity Cohorts accurately, we track every data point related to a token.
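The exact formula behind the Concentration Index is described later; as a point of reference, a common way to quantify holder concentration is a Herfindahl-style sum of squared ownership shares. The sketch below is illustrative, not our production formula.

```python
def herfindahl(balances):
    """Sum of squared holder shares: near 0 when supply is widely
    distributed, 1.0 when a single holder owns everything."""
    total = sum(balances.values())
    if total == 0:
        return 0.0
    return sum((bal / total) ** 2 for bal in balances.values())

print(herfindahl({"a": 50, "b": 50}))   # 0.5 (two equal holders)
print(herfindahl({"a": 100}))           # 1.0 (fully concentrated)
```

Any such measure is only as good as its input set, which is why the exclusion of contracts and CEX wallets described earlier happens before the index is computed.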
In the following sections, you'll find detailed technical explanations of how we carry out this process.