Kubernetes Zero to Hero Course: Alta3 Research

Link: https://youtu.be/MTHGoGUFpvE

Context

This is a masterpiece. I learned everything, or at least the course touched on everything. Loved it. I want to get into it, leverage it, and play with it. It all clicks for me now: the autoscaling, the security, and wow, the volumes. Everything makes sense after using these features and taking them for granted because of the Cloud Run abstraction.

Source: techstructive-weekly-77

Don’t Fall into the ANTI AI HYPE

This is interesting, and it comes at the right time.

Facts are facts, and AI is going to change programming forever.

It does not matter if this or that CEO of some unicorn is telling you something off-putting or absurd. Programming has changed forever, anyway.

What is the social solution, then? Innovation can’t be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.

AI codes better than me, now what?

Link: https://youtu.be/UrNLVip0hSA

Context

This is really changing things. It can write code better than me. That's when I started to use it as a partner: one that knows a lot but gets overwhelmed and, like a junior, does too many things at once. Guiding it, reviewing it, and also understanding myself what it actually does is so critical.

Source: techstructive-weekly-76

AI should be free software

Why free software is a surprisingly important component of good futures with powerful AI

Link: https://substack.com/inbox/post/183934559

Context

Yikes, this looks like a good take on LLMs being free and open-weight. If they aren't, the larger AI labs might inject ads into LLM suggestions. Just the thought of that makes me squirm with fear. It might push us in weird directions. The line between "our goal" and "the model's goal" becomes hazy, and it just doesn't align with human values. It's a pretty hard problem to solve if things go in a bad direction, which they seem to be doing at the moment.