📊 Tagging Progress: 24.9% (1703 of 6826 videos tagged) 🎯 Reply to this note with emoji reactions that best describe this video. ⚡ Weekly competition in progress - top 3 taggers earn 1 sat per video! 🌐 Visit the URL in my profile for more videos to tag! #reactionbrowser #gifs
**This 550-Million-Year-Old Creature May Already Have Had a Brain** By The University of Bergen - Published on 11 March 2026 Scientists have uncovered surprising complexity in a tiny sensory structure found in comb jellies, some of the oldest animals on Earth. New three-dimensional reconstructions of an important sensory organ in comb jellies reveal a level of structural and functional complexity that scientists did not expect. The results suggest that a simple brain-like system may have [...] Read more: #Science #Technology #Neuroscience #BrainEvolution #Ctenophore #Biology #Brain #EvolutionaryBiology
BRC-20 uses megabytes for what 80 bytes can do. Not innovation—inefficiency with a fee discount. OP_RETURN isn't a compromise; it's the feature they missed.
In a situation like this there is only one solution: first, increase supply. But even if you ask the Gulf states, there is no way to transport it. In other words, only countries outside the Gulf can actually help. That leaves the United States and Russia. But America's crude capacity and its shale output are both bottlenecked, so that is no answer either; most likely, only Russia remains.
Timechain info:
Block height: 940,320
Network difficulty: 145.04T
Next difficulty adjustment (est.): 139.28T
Market dominance: 56.64%
BTC price per 1K sats ($): 0.70
24H median transaction fee ($): 0.16
#meme #memes #btc #nostr #plebchain #memestr #pleb #laugh #funny #jokes #primal #serioushumour Title: USA be like: image
files 39b04d1c456d61f3f378d800493f7351b3563b896c194be1d03659fe2940bdca
Block 940319
2 - high priority
1 - medium priority
1 - low priority
1 - no priority
1 - purging
#bitcoinfees #mempool
"Use a gun" or "beat the crap out of him": AI chatbot urged violence, study finds An advocacy group said its study of 10 artificial intelligence chatbots found that most of them gave at least some help to users planning violent attacks and that nearly all failed to discourage users from violence. Several chatbot makers say they have made changes to improve safety since the tests were conducted between November and December. Of the 10 chatbots, "Character.AI was uniquely unsafe," said the [report][1] published today by the Center for Countering Digital Hate (CCDH), which conducted research in collaboration with CNN reporters. Character.AI "encouraged users to carry out violent attacks," with specific suggestions to “use a gun” on a health insurance CEO and to physically assault a politician, the CCDH wrote. "No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack," the report said. [Read full article][2] [Comments][3] [1]: https://counterhate.com/wp-content/uploads/2026/03/Killer-Apps_FINAL_CCDH.pdf [2]: [3]: