Jingles
_@jingles.dev
npub1alph...6dsn
Building on #NOSTR | Applying #AI in #neuroscience | Satcom - social layer for the internet https://satcom.app/ | W3 - URL shortener https://w3.do
jingles 1 year ago
https://w3.do/ FOR DEVELOPERS

You're likely familiar with the lengthy URLs that users share. Fret not! There's a solution with just one RESTful API call.

To shorten a URL, simply send a GET request to the following endpoint:

https://w3.do/get?url=<naddr1...here>

For instance, you can shorten a Nostr address by sending a GET request to:

https://w3.do/get?url=naddr1qq2kw52htue8wez8wd9nj36pwucyx33hwsmrgq3qyzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesxpqqqp65w6998qf

This will give you a new, shortened URL: https://w3.do/sFdxS4TC

Happy coding!
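For reference, here is a minimal sketch of that call in Python. The `/get?url=` endpoint and the example address come from the post above; the assumption that the shortened URL comes back as the plain-text response body is mine, since the post doesn't specify the response format.

```python
import urllib.parse
import urllib.request

def shorten(long_url: str) -> str:
    # Call the w3.do shortener described above. Assumes the shortened
    # URL is returned as the plain-text response body, which the post
    # does not actually specify.
    endpoint = "https://w3.do/get?url=" + urllib.parse.quote(long_url, safe="")
    with urllib.request.urlopen(endpoint) as resp:
        return resp.read().decode().strip()

# Shorten the example naddr1... address from the post.
print(shorten("naddr1qq2kw52htue8wez8wd9nj36pwucyx33hwsmrgq3qyzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesxpqqqp65w6998qf"))
# Expected output per the post: https://w3.do/sFdxS4TC
```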
jingles 1 year ago
W3.DO - NOSTR BASED URL SHORTENER (https://w3.do/)

UPDATE: NOSTR NIP-19 redirects via njump.me

Using a bech32-formatted string according to NIP-19 as input will generate a shortened URL that redirects to njump.me. This includes: `nevent1`, `note1`, `npub1`, `nprofile1`, `naddr1`.

For instance, when the input `npub1alpha9l6f7kk08jxfdaxrpqqnd7vwcz6e6cvtattgexjhxr2vrcqk86dsn` is provided, it generates a shortened URL, https://w3.do/ig2FmnQY, which redirects to https://njump.me/npub1alpha9l6f7kk08jxfdaxrpqqnd7vwcz6e6cvtattgexjhxr2vrcqk86dsn.
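As a sketch of that behavior in Python, under the same assumptions as the developer post above (the `/get?url=` endpoint with a plain-text response body), shortening an `npub1` and following the redirect might look like this:

```python
import urllib.request

NPUB = "npub1alpha9l6f7kk08jxfdaxrpqqnd7vwcz6e6cvtattgexjhxr2vrcqk86dsn"

# Shorten the NIP-19 string; the plain-text response body is an assumption.
with urllib.request.urlopen(f"https://w3.do/get?url={NPUB}") as resp:
    short_url = resp.read().decode().strip()  # e.g. https://w3.do/ig2FmnQY

# urlopen follows redirects by default, so the final URL should be on njump.me.
with urllib.request.urlopen(short_url) as resp:
    print(resp.url)  # https://njump.me/npub1alpha9...
```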
jingles 1 year ago
On-device generative machine learning models are coming to Macs. Apple researchers managed to squeeze a large language model onto a MacBook Pro M2 by storing the model weights in flash memory, which the fast storage on M chips makes practical: inference runs up to 25 times faster, and models up to 2x larger than the available memory become usable. https://arxiv.org/pdf/2312.11514.pdf