If they’re going to be hostile and create a toxic environment from the start, then I’m out. A bunch of people who don’t even have a ChatGPT Plus subscription are getting together to run an “AI practice” course. Beyond covering some classic deep learning and reading papers published after the Transformer, I don’t know what else they could possibly teach. If I want to learn something, I’ll obviously just ask an AI. Good prompt engineering is like having two PhD candidates taking turns serving me.
For those not aspiring to do research in NLP / CV / RL, the core task isn’t to grind away at obscure academic problems and turn oneself into human capital by applying those findings. Instead, it’s about integrating AI as a tool into one’s own workflow; that is what being AI-native actually means.
The crux of the problem with most of these “practices” is that they treat AI merely as a patch on existing paradigms, not as an entirely new mode of interaction. That’s why you see companies wanting to hire people who can use AI tools while still banning AI in their structured interviews.
The current stampede toward deep learning is the behavior of an “investor,” a far cry from that of a true “believer” in the AI-native way. And as I’ve pointed out before, this kind of rush is often irrational. A little understanding of the consumer-side landscape and of community practices would temper such fanaticism.