Having said that: I think there’s a product here, and some lessons to learn. Perhaps the authors will eventually apply them to SpacetimeDB v3 and launch a more resilient and LLM-friendly database: one where application code is isolated and can run for as long as it needs without affecting other application code running locally, even in the face of serious implementation bugs; where transactions can run as long as they need without degrading the performance of other transactions, and are implicitly throttled if they take too long because the LLM did not provide an optimal query plan. Perhaps we’ll see a system that is far more resilient to failure, at the cost of less “impressive performance”; perhaps it will be trivially distributed, so that the AI agent doesn’t have to design a distributed system itself; perhaps it will launch with fewer silly benchmarks and more technical details.
You have many instances of the same struct (hundreds or thousands)