
24 June 2023
Etherscan Introduces Code Reader: The AI-Powered Tool For Ethereum Contract Analysis

The new tool lets users retrieve and make sense of the source code of a given contract address with the help of an artificial intelligence prompt.

Etherscan, the Ethereum block explorer and analytics platform, introduced a new tool called "Code Reader" on June 19. The tool uses artificial intelligence to retrieve and interpret the source code of a specific contract address. Given a prompt from the user, Code Reader generates a response using OpenAI's large language model, providing insight into the contract's source code files. The tool's tutorial page states:

        “To use the tool, you need a valid OpenAI API Key and sufficient OpenAI usage limits. This tool does not store your API keys.”

Code Reader's potential uses include gaining a deeper understanding of a contract's code through AI-generated explanations, obtaining comprehensive lists of smart contract functions related to Ethereum data, and understanding how the underlying contract interacts with decentralized applications. According to the tutorial page, once the contract files are retrieved, users can choose a specific source code file to read. The source code can also be edited directly in the UI before being shared with the AI.
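Etherscan has not published how Code Reader works internally, but the workflow the tutorial describes, fetching a contract's verified source and then querying an OpenAI model about it, can be approximated with public APIs. The Python sketch below is only an illustration of that general flow under those assumptions; the environment variable names, the chosen model, and the example prompt are placeholders, not details of Etherscan's tool.

    import os
    import requests

    # Hypothetical environment variables for this sketch; Code Reader itself only asks
    # the user for an OpenAI API key and does not store it.
    ETHERSCAN_KEY = os.environ["ETHERSCAN_API_KEY"]
    OPENAI_KEY = os.environ["OPENAI_API_KEY"]

    def fetch_contract_source(address: str) -> str:
        """Fetch verified source code for a contract via Etherscan's public getsourcecode API."""
        resp = requests.get(
            "https://api.etherscan.io/api",
            params={
                "module": "contract",
                "action": "getsourcecode",
                "address": address,
                "apikey": ETHERSCAN_KEY,
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["result"][0]["SourceCode"]

    def explain_contract(source: str, prompt: str) -> str:
        """Send the contract source plus a user prompt to OpenAI's chat completions endpoint."""
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            json={
                "model": "gpt-3.5-turbo",  # placeholder; the model Code Reader uses is not specified
                "messages": [
                    {"role": "system", "content": "You explain Ethereum smart contract source code."},
                    {"role": "user", "content": f"{prompt}\n\n{source}"},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        # Example: ask about the canonical WETH9 (wrapped ether) contract.
        weth = "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
        source = fetch_contract_source(weth)
        print(explain_contract(source, "Summarize what the main functions of this contract do."))

A production tool would also need to handle multi-file contracts and source files that exceed the model's context window; this sketch ignores both.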

In the midst of an AI boom, some experts have raised concerns about the feasibility of current AI models. A report recently published by Singaporean venture capital firm Foresight Ventures states that "computing power resources will be the next big battlefield for the coming decade." Yet despite growing demand for training large AI models on decentralized distributed computing power networks, researchers have identified significant limitations in current prototypes, such as complex data synchronization, network optimization, and concerns over data privacy and security.

As an illustration, the Foresight researchers point out that training a large model with 175 billion parameters, stored as single-precision floating-point numbers, would take up approximately 700 gigabytes, since each parameter occupies four bytes. Distributed training, however, requires these parameters to be transmitted and updated frequently between computing nodes. With 100 computing nodes, each needing every parameter updated at each unit step, the model would require the transmission of 70 terabytes of data per second, far beyond the capacity of most networks (a back-of-the-envelope version of this calculation appears after the quote below). The researchers concluded that:

       “In most scenarios, small AI models are still a more feasible choice, and should not be overlooked too early in the tide of FOMO [fear of missing out] on large models.”
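The 700-gigabyte and 70-terabyte figures follow from simple arithmetic; the short Python snippet below reproduces them under the report's stated assumptions (FP32 parameters, 100 nodes, a full parameter copy per node at each step).

    # Back-of-the-envelope reproduction of the Foresight Ventures figures (illustrative only).
    params = 175e9        # 175 billion parameters
    bytes_per_param = 4   # single-precision (FP32) floats are 4 bytes each

    model_size_gb = params * bytes_per_param / 1e9
    print(f"Model size: ~{model_size_gb:.0f} GB")             # ~700 GB

    nodes = 100           # cluster size assumed in the report
    traffic_tb = model_size_gb * nodes / 1e3
    print(f"Traffic per full update: ~{traffic_tb:.0f} TB")   # ~70 TB, framed in the report as a per-second load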
