dcSpark CTO Explains Why Cardano Is ‘One of the Worst Blockchains for Storing Data’
August 13, 2022
On Saturday (August 13), Sebastien Guillemot, the CTO of blockchain company dcSpark, said that L1 blockchain Cardano ($ADA) is “definitely one of the worst blockchains for storing data”, and proceeded to explain why he thinks so.
In case you are wondering what dcSpark does, according to its development team, the main goals are to:
- “Extend Blockchain Protocol Layers”
- “Implement First-Class Ecosystem Tooling”
- “Develop and Release User-Facing Apps”
The firm was co-founded in April 2021 by Nicolas Arqueros, Sebastien Guillemot, and Robert Kornacki. dcSpark is best known in the Cardano community for its sidechain project Milkomeda.
On Friday (August 12), one Cardano advocate sent out a tweet that made it sound like Cardano is a great blockchain for storing large amounts of data on chain.
However, the dcSpark CTO replied that Cardano’s current design makes it one of the worst blockchains for storing data:
“Really strange tweet. Cardano is definitely one of the worst blockchains for storing data and this was an explicit design decision to avoid blockchain bloat and it’s the root cause of many design decisions like plutus data 64-byte chunks, off-chain pool & token registry, etc…
Vasil improves this with inline datums, but they are indirectly discouraged because of the large cost of using them. I do agree that having the blockchain provide data availability is an important feature, but having a good solution will require changes to the existing protocol."
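To see the trade-off Guillemot is describing, consider how a transaction output can reference its datum. The sketch below is a simplified Haskell illustration (the types are stand-ins, not the actual plutus-ledger-api or cardano-ledger definitions): a datum hash keeps only a fixed 32-byte digest on-chain, while a Vasil-era inline datum (CIP-32) puts the entire payload on-chain, which is why using inline datums for large data is comparatively expensive.

```haskell
-- Simplified illustration only: these types approximate the shape of
-- Plutus V2 / CIP-32 output datums; they are not the real ledger types.
module Main where

import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BSC

-- A transaction output can carry no datum, a hash of a datum
-- (with the datum itself supplied off-chain when spending), or
-- the full datum inline on-chain (the Vasil / CIP-32 option).
data OutputDatum
  = NoDatum
  | DatumHash BS.ByteString    -- only a fixed-size digest is stored on-chain
  | InlineDatum BS.ByteString  -- the whole payload is stored on-chain

-- Rough on-chain footprint of each option, in bytes.
onChainBytes :: OutputDatum -> Int
onChainBytes NoDatum         = 0
onChainBytes (DatumHash _)   = 32            -- digest size stays constant
onChainBytes (InlineDatum d) = BS.length d   -- grows with the payload

main :: IO ()
main = do
  let digest  = BS.replicate 32 0              -- placeholder 32-byte hash
      payload = BSC.pack (replicate 4000 'x')  -- hypothetical 4 kB datum
  putStrLn ("datum hash reference: " ++ show (onChainBytes (DatumHash digest)) ++ " bytes on-chain")
  putStrLn ("inline datum:         " ++ show (onChainBytes (InlineDatum payload)) ++ " bytes on-chain")
```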
Then, another $ADA holder asked Guillemot if this design decision could make life harder for teams building rollup solutions (such as Orbis), and he received the following reply:
“Yes, trying to provide data availability for use cases like rollups, mithril, input endorsers and other similar data-heavy use-cases while keeping the L1 slim (unlike Ethereum which optimizes for people just dumping data) is one of the large technical challenges being tackled."
On August 1, IOG Co-Founder and CEO Charles Hoskinson released a short video, in which he explained why the Vasil hard fork had been delayed for a second time and provided a status update regarding the testing of the Vasil protocol update.
Hoskinson said:
“Originally, we planned to have the hard fork with 1.35, and that’s what we shipped to the testnet. The testnet was hard forked under it. And then a lot of testing, both internal and community, were underway. A collection of bugs were found: three separate bugs that resulted in three new versions of the software. And now, we have 1.35.3, which looks like it is going to be the version that will survive the hard fork and upgrade to Vasil.
“There’s a big retrospective that will be done. The long short is that the ECDSA primitives, amongst a few other things, are not quite where they need to be. And so, that feature has to be put aside, but all of the remaining features, CIP 31, 32, 33, 40 and other such things are pretty good.
“So those are in advanced stages of testing, and then a lot of downstream components have to be tested, like DB Sync and the serialisation library, and these other things. And that’s currently underway. And a lot of testing is underway. As I mentioned before, this is the most complicated upgrade to Cardano in its history because it includes both changes to the programming language Plutus plus changes to the consensus protocol and a litany of other things, and was a very loaded release. It had a lot in it, and as a result, it’s one that everybody had a vested interest in thoroughly testing.
“The problem is that every time something is discovered, you have to fix that, but then you have to verify the fix and go back through the entire testing pipeline. So you get to a situation where you’re feature-complete, but then you have to test and when you test, you may discover something, and then you have to repair that. And then you have to go back through the entire testing pipeline. So this is what causes release delays…
“I was really hoping to get it out in July, but you can’t do it when you have a bug, especially one that is involved with consensus or serialisation or related to a particular issue with transactions. Just have to clear it, and that’s just the way it goes. All things considered though, things are moving in the right direction, steadily and systematically…
“The set of things that could go wrong have gotten so small, and now we’re kind of in the final stages of testing in that respect. So unless anything new is discovered, I don’t anticipate that we’ll have any further delays, and it’s just getting people upgraded…
“And hopefully, we should have some positive news soon as we get deeper into August. And the other side of it is that no issues have been discovered with pipelining, no issues have been discovered with CIP 31, 32, 33 or 40 throughout this entire process, which is very positive news as well, and given that they’ve been repeatedly tested internally and externally by developers, QA firms and our engineers, that means there’s a pretty good probability that those features are bulletproof and tight. So just some edge cases to resolve, and hopefully we’ll be able to come with a mid-month update with more news."
https://youtube.com/watch?v=Na09S56FwuY
Image Credit
Featured Image via Pixabay