• 21 Posts
  • 460 Comments
Joined 1 year ago
Cake day: August 13th, 2023




  • superb sneer in the YouTube comments

    @monx 36:10 Mikey hints at the central contradiction of his vision. love that he sees music as more than background noise. But this kind of “interchangeable commodity” is exactly what generative AI produces. It’s similar to a fantasy where you’ll type “painting of a beautiful sunset” into a machine and then have the output hung in the Guggenheim, as if you made any meaningful contribution, as if the interpolation of training data (infinite in supply) would be special and meaningful to people. We will not escape the tautology that generative art is cheap. This can only be resolved by adding more and more degrees of creative control to the input - controls that demand more skill from the operator - until, finally, we arrive back where we started.

    would love to read more from this person




  • Another thing I’ve found actually works pretty well is setting up two computers next to each other with ChatGPT voice mode. If you give them custom instructions to be sure to wait for the other one to be done talking, they don’t interrupt each other and can get quite a bit of work done. Here is just a video of the MVP that I sent to a friend ages ago once I started playing with the idea: https://s.h4x.club/kpuzNkNL - I actually use this method of working quite often now, a couple times a week at least, and I find it’s pretty helpful. If I knew how to put 4/5 models together in one app and give them each custom instructions, I’d love to try building a team (if someone out there actually knows how to build this kinda stuff, I’m happy to help flesh out how the product would need to work, but I don’t think it’s super difficult to build at this point, I’m just not technical enough)

    you must click the video link
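    for what it’s worth, the “team of models with custom instructions taking strict turns” idea the commenter is groping toward is trivially a round-robin loop. a minimal sketch, with the `Agent` class and its canned `respond()` as made-up stand-ins for real chat-completion calls (no actual API is used here):

    ```python
    # Hypothetical sketch of the quoted "team of models" idea: several agents,
    # each with its own custom instructions, speaking in strict turns instead
    # of talking over each other. respond() is a placeholder for a real model
    # call; a real version would send `instructions` as the system prompt
    # along with the running transcript.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        instructions: str              # per-agent "custom instructions"
        transcript: list = field(default_factory=list)

        def respond(self, last_message: str) -> str:
            # Stand-in for a chat-completion call.
            reply = f"{self.name} acknowledges: {last_message!r}"
            self.transcript.append(reply)
            return reply

    def run_round_robin(agents, opening: str, turns: int) -> list:
        """Strict turn-taking: each agent speaks only after the previous one
        finishes -- what the 'wait for the other one to be done talking'
        instruction approximates with voice mode."""
        log = [opening]
        message = opening
        for i in range(turns):
            speaker = agents[i % len(agents)]
            message = speaker.respond(message)
            log.append(message)
        return log

    agents = [Agent("planner", "Plan the work."), Agent("critic", "Poke holes.")]
    log = run_round_robin(agents, "Ship the MVP", turns=4)
    ```

    which is to say: the hard part was never the plumbing.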




  • so openai is claimed to be doing great on the FrontierMath dataset. I’ve already seen the usual sort of dipshits using this to pump ai on reddit, and here’s a post that went to the frontpage on HN:

    https://xenaproject.wordpress.com/2024/12/22/can-ai-do-maths-yet-thoughts-from-a-mathematician/

    (tl;dr only a few problems from the dataset are public, but if they’re representative, roughly 25% of the problems are survivable by an undergrad; coincidentally this is the % openai says their models are completing.)

    this post is by kevin buzzard. he has a let’s say not easily beloved personality, but I don’t think of him as credulous or grifty, and people in his area regard him as an excellent mathematician.

    he points out, but I think does not focus enough on, how discrediting the secretive nature of the dataset is. keeping the problems private is necessary to stop them leaking into training data, which is the only way to run such experiments in a scientifically reasonable way - but it also makes it totally impossible to run the experiment in a scientifically reasonable way, because nobody outside can check anything. an experiment which cannot be examined or reproduced is actually the opposite of science. it’s pure grift fuel