Grokipedia: A first look

Preview

To begin with my credentials for those who arrive here not knowing who I am: I've started, or helped start, five encyclopedias and meta-encyclopedia projects, including Wikipedia.((I founded Nupedia and Wikipedia, advised the design and launch of …

Link: https://larrysanger.org/2025/10/grokipedia-a-first-look/

Context

I really like the post. It was a balanced take on the new, hyped LLM site. It looks good, and maybe it serves a purpose, but for that 1% of searches I want to be 100% sure, not 101% confidently wrong. We don't know what this will do to the pool of human learning and knowledge, but it might have an impact, good or bad. It's a pivotal moment in internet history: we either go full-on AI or become superhumans.

It’s insulting to read your AI-generated blog post

Preview

It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivale...

Link: https://blog.pabloecortez.com/its-insulting-to-read-your-ai-generated-blog-post/

Context

I will never ever try to use AI in my writing, because I want to think; otherwise, what is the purpose of doing anything? If humans just hand off every bit of work to AI, what is the moat of humans? What is the advantage of having a big brain? It's like having two queens on the chessboard and still not being able to checkmate. Skill issue.

Sabbaticals keep our attrition at bay

Preview

The only way many tech workers in the US can get a long break is by quitting their job. So lots of them do that every few years, which is partly why the average tenure in our industry is at an atrocious 18 months. But this terrible rate of churn is …

Link: https://world.hey.com/dhh/sabbaticals-keep-our-attrition-at-bay-9ccba5c0

Context

A six-week break, almost one and a half months: that is a huge one. I don't like breaks. Maybe one becomes necessary at some point in someone's life, when the situation demands it, but just because the option exists, I don't like taking it. I think pushing yourself to keep working builds the skill of consistency and trust.

Andrej Karpathy on Dwarkesh Patel Podcast

Link: https://youtu.be/lXUZvyajciY

Context

I haven't finished watching it, but I felt really excited to learn more about LLMs. I like the analogy between the human brain and an LLM: when we sleep, we kind of reset the context window but update our parameters; we internalise the lessons, think and process in the background, and connect things up. I also found it surprising that matching the state-of-the-art models with a 1B-parameter model could take a decade or so. That sounds practical, but considering the frequency of current model releases, it looks like it could happen almost next year.

How do arrays work?

Preview

What goes on under the hood of the most popular data structure? In this post, we uncover the secrets of the array by reinventing one ourselves.

Link: https://nan-archive.vercel.app/how-arrays-work

Context

Such a sweet little blog post. It walks through the naive array logic and then offers better approaches and further possibilities, leaving the reader curious and excited to try them.
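To give a flavour of the "reinvent an array yourself" exercise the post describes, here is my own minimal sketch (not the post's actual code) of a naive fixed-capacity array with O(1) indexed access and no resizing:

```python
class NaiveArray:
    """A naive fixed-capacity array: pre-allocated slots, no automatic resizing."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._slots = [None] * capacity  # stand-in for a contiguous memory block
        self._length = 0

    def append(self, value):
        if self._length == self._capacity:
            # The naive version simply fails when full; a growable array
            # would allocate a bigger block and copy elements over.
            raise IndexError("array is full")
        self._slots[self._length] = value
        self._length += 1

    def get(self, index):
        if not 0 <= index < self._length:
            raise IndexError("index out of range")
        return self._slots[index]  # O(1): base address plus index offset

    def __len__(self):
        return self._length
```

The interesting part, which the post explores further, is what happens when capacity runs out: the usual answer is to allocate a larger block and copy, which is exactly the step this naive version leaves out.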

Source: techstructive-weekly-65

I used to like software development, but not anymore

Link: https://blog.kulman.sk/i-used-to-like-software-development-but-not-anymore/

Context

Nostalgia. I remember starting my programming journey, installing Code::Blocks and PyCharm. That was a heck of a task, but following Bucky Roberts's tutorials and actually understanding the material was pure joy. Nowadays, who needs to understand variables? An LLM just takes care of it. The depth, the productive pain of being uncomfortable, is lost. The joy of finding the right Stack Overflow question is lost. It's not just AI or LLMs; people are just working a bit weird. The mindset, the systems, have outgrown humans into productivity myths, and it's rotting their brains.

Richard Sutton on Dwarkesh Patel Podcast

Link: https://youtu.be/21EYKqUsPfg

Context

It was so deep; his thinking is so defensive and critical. Some of the points I found contrasting. The start was promising, but I believe he started to undercut his own points: the math-solving problems make sense, but then there's the evolution of human thinking and its built-in parameters. The point of intrinsic motivation was never mentioned in the conversation, which makes me wonder why not. It is such a distinguishing factor, but then again, he doesn't want to distinguish humans, so why try to mimic them?