I asked ChatGPT what HTML was. It said "it tells the browser what's a heading, a paragraph, a link, an image, a table, etc." For my taste, this answer wasn't quite accurate enough. To be fair, I wouldn't necessarily expect any better from a teacher in a classroom. However, I think it misses the point that form is function. And I find issues like this prevalent in ChatGPT's answers. I think it is more accurate to say that HTML tags information, which the web browser can then use to format that information. I do not like the word "tell" in ChatGPT's answer, because the HTML is just sitting there and the browser acts on it. It's not always a good idea to assign intention to software, or to anthropomorphize it. I also think it is quite bad when assumptions get embedded into descriptions of things that don't themselves assume anything. In the case of HTML, a header tag does not specify that something should look like a header. It simply specifies that the information will be formatted according to however the web browser displays headers. To be clear, "header" in HTML is a string of letters, which is different from the idea of what a header is in common usage.
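To make that concrete, here is a minimal sketch (the heading text and the style rule are my own illustration, not markup from any particular page):

    <!-- The h1 tag only labels the text as a heading; it says nothing about size or boldness. -->
    <html>
      <head>
        <style>
          /* The browser's default stylesheet happens to render h1 large and bold,
             but any CSS rule can override that. With this rule, the "header"
             displays small and plain, because the tag carries meaning, not appearance. */
          h1 { font-size: 1em; font-weight: normal; }
        </style>
      </head>
      <body>
        <h1>My Heading</h1>
        <p>A paragraph, tagged as a paragraph.</p>
      </body>
    </html>

The tag stays the same either way; only the browser's rendering of it changes.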
I find this trend common in ChatGPT's answers, and in many human-generated ones. I believe there is far too much emphasis on trying to anthropomorphize ideas and objects. I believe it was Einstein who said to "make things as simple as possible, but not simpler." While metaphor and analogy are often used to help explain things, they will never be as accurate as addressing things as they are. And I think it is usually not a good idea to trade accuracy for a temporary sense of understanding.
I told Claude about an idea of mine for a website, with the instruction that it would also be a learning experience for me in website design, software development, and especially in using agentic AI. It spit out a plan, and I took my questions to ChatGPT, such as what HTML, CSS, and JavaScript are, and which tools to use to create a website.
I'm working in VS Code now, editing HTML directly and using GitHub Copilot to take care of all the formatting. I have purchased a domain; luckily my name was available. I bought it on Squarespace and am hosting it on Vercel. Vercel apparently has a free hobby tier of hosting, as I have not paid for this. I appreciate it. Additionally, VS Code is free, as is limited use of GitHub Copilot. So everything has been quite fun and simple so far. In Copilot I said I wanted a tab for a blog and a post. Simple as that. By all means, basic familiarity with software is still helpful. But the learning curve has changed so much.
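To give a sense of scale, a site with a blog tab and one post can be as small as the sketch below. This is a hypothetical example with made-up file names, not the markup Copilot actually generated for my site:

    <!-- index.html: the home page, with a tab (really just a link) to the blog. -->
    <html>
      <body>
        <nav>
          <a href="index.html">Home</a>
          <a href="blog.html">Blog</a>
        </nav>
        <h1>Welcome</h1>
      </body>
    </html>

    <!-- blog.html: the blog page, holding a single post. -->
    <html>
      <body>
        <nav>
          <a href="index.html">Home</a>
          <a href="blog.html">Blog</a>
        </nav>
        <article>
          <h2>My first post</h2>
          <p>Some thoughts on building this site with agentic AI.</p>
        </article>
      </body>
    </html>

Drop the two files in a folder and a static host can serve them as-is.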
I think it is similar to the most modern ideas about learning languages. It used to be commonly thought that children learn languages exceptionally well, that there is some language learning window, and that outside of this window the methods of language learning must suddenly change. To my understanding, there was never a rigorous experiment testing this idea, and I do believe it rests on heavy assumptions. I will say there is such a thing as neuroplasticity, but connecting the two in this way is a bit heavy-handed.
Simply put, babies and children essentially get multiple years with multiple tutors teaching them the language in complete and total immersion. By immersion I mean not just watching native TV or the like, but the real pairing of sensation and sound. For example, when a baby experiences the cold, whether an ice cube or the winter, the parent says "cold," "this is cold," "brrrr," and the parent will point and motion and sound it out and repeat it many times. This is far different from what is termed "transfer learning," where a student who knows the word cold associates it with the word for cold in another language, while never experiencing the tutoring that the baby received. The point is, if you were to take an adult and treat them exactly as a newborn, I believe they would match the speaking and reading levels of their real baby peers. Put in these terms, I think the language learning window idea does become silly. For example, a native speaker who is 5 years old has had 5 years of constant immersion and will still speak like a 5-year-old. I would expect a 30-year-old non-native speaker, immersed side by side from the 5-year-old's birth, to be the better speaker by the time they reach 35.
I have not done this experiment and I don't know of anyone who plans on doing it. But I hope it does make clear just how much of an assumption it is to imagine that children have some innate advantage in language learning. The point is, there are times when learning things in some technical way is not as effective as simply experiencing them, and I think agentic AI will help with this experiencing. For example, having Copilot generate this simple HTML page that I am now editing is probably more effective than writing my own from scratch would have been. However, this will not always be the case because, as I have written earlier, I think ChatGPT sometimes brushes over technical details in favor of a pretend understanding, which is probably a result of its training data.
I should clarify just what makes for a good display of information. I think a big part of it is boundaries. I find it critical to know not what something can do, but rather the limits of what it can do. That, and a focus on function simply being the result of form, and third, not anthropomorphizing things with intention or will. Additionally, I'll give my favorite tip on PowerPoint presentations: everything that is talked about must be on the slide, and everything that is on the slide must be talked about.
The takeaway: agentic AI is quite fun. I envision a world with more personal websites that do not use pre-built architectures and formatting. Sort of like baking one's own bread. It may not be as good as a loaf from a bakery, but it can be a lot of fun if you have the time and resources.