Posts: 0 · Comments: 172 · Joined: 2 yr. ago

  • I think snakes would probably use tunnels, but I think snakes would use just as much concrete as us. We don't use concrete because we like how it looks; we use it despite its looks. Concrete is just cheap and useful, and it wouldn't be any less useful for snakes. Though snakes would probably coat the concrete in something softer and higher friction, like rubber.

  • The word "have" is used in two different ways. One way is to own or hold something, so if I'm holding a pencil, I have it. The other way is to signal different tenses (as in grammatical tense), so you can say "I shouldn't have done it" or "they have tried it before." The contraction "'ve" is only used for tense, not for owning something. So the phrase "they've it" is grammatically incorrect.

  • I'm assuming English isn't your first language, but "IPoAC would've it's purpose" is grammatically awkward. "Would've" doesn't really work for possession. Instead you can use "would have," but people would typically say "IPoAC has its purpose."

  • Let's play a little game, then. We both give each other descriptions of the projects we made, and we each try to build the other's project using only what we can get out of ChatGPT. We send each other the chat log after a week or something. I'll start: the hierarchical multiscale LSTM is a stacked LSTM where each layer returns a boundary state, and the layer above it updates only when that boundary state is true. The final layer is another LSTM that takes the hidden state from every layer and returns a final hidden state as an embedding of the whole input sequence.

    I can't do this myself, because that would break OpenAI's terms of service, but if you make a model that won't develop into anything, that's fine. Now, what does your framework do?

    Here's the paper I referenced while implementing it: https://arxiv.org/abs/1807.03595
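To make the description above concrete, here's a minimal sketch of the boundary mechanism in plain Python. The toy "segment ends at zero" boundary rule and the counter standing in for the LSTM cell are my own illustrative assumptions; the real model learns its boundaries (see the linked paper):

```python
# Sketch of the HM-LSTM update schedule: each layer in the stack
# updates only when the layer below it raises a boundary flag.
# The toy boundary rule (input == 0) and the counter "state" are
# illustrative stand-ins, not the paper's actual gating equations.

def hm_lstm_boundaries(inputs, num_layers=3):
    """Return, per time step, the indices of the layers that updated.

    Layer 0 updates every step; layer k (k > 0) updates only when
    layer k-1 emitted a boundary at this step.
    """
    states = [0] * num_layers           # toy per-layer state (a counter)
    history = []
    for x in inputs:
        updated = []
        boundary_below = True           # layer 0 always sees new input
        for k in range(num_layers):
            if boundary_below:
                states[k] += 1          # stand-in for the LSTM cell update
                updated.append(k)
                # toy boundary: fire upward at segment ends (x == 0)
                boundary_below = (x == 0)
            else:
                boundary_below = False  # COPY op: state passes through
        history.append(updated)
    return history

# Three "segments" separated by zeros: upper layers only tick at the 0s.
print(hm_lstm_boundaries([1, 1, 0, 1, 0, 1]))
# → [[0], [0], [0, 1, 2], [0], [0, 1, 2], [0]]
```

The point of the hierarchy is visible even in this toy: the higher layers run at a coarser timescale because they only step at segment boundaries.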

  • Sorry that my personal experience with ChatGPT is 'wrong.' If you feel the need to insult everyone who disagrees with you, that seems like a better indication of your ability to communicate than mine. Furthermore, I think we're talking about different levels of novelty. You haven't told me the exact nature of the framework you developed, but the things I've tried to use ChatGPT for never turn out too well. I do a lot of ML research, and ChatGPT simply doesn't have the flexibility to help. I was implementing a hierarchical multiscale LSTM, and no matter what I tried, ChatGPT kept getting mixed up and implementing more popular models. ChatGPT, due to the way it learns, can only reliably interpolate between the excerpts of text it's been trained on. So I don't doubt ChatGPT was useful for designing your framework, since it's likely similar to other existing frameworks, but for my needs it simply does not work.

  • ChatGPT has never worked well for me. Sure, it can tell you how to center a div, but for anything complex it just fails. ChatGPT is really only useful for elaborating on something. You can give it a well-commented code snippet, ask it to add some simple feature, and it will sometimes give a correct answer. For coding, it has the same level of experience as a horde of high school CS students.

  • People keep answering this in the most boring way. Here's a slightly less boring answer:

    Wait for nightfall

    Sneak up to the dino

    Stab it in the eye

    Run into hut

    The T-Rex won't be able to remove the knife, so the wound will become infected and eventually kill it.

  • You're right. Troi's and Data's hands are messed up, Data has unreal wrinkles on his forehead, the shadow on Picard's neck seems to be a dent, and of course, Troi's nose has a different camera angle on either side.

  • I only vaguely know what's going on. I did some more research after commenting, and I think I understand a little bit more. The TI bubble memory has two separate layers. One of them, the 'magnetic epitaxial film', basically has a lot of magnetic molecules arranged to point in the same direction. The second layer has circles made of some nickel-iron alloy. What I think is happening is that the actual magnetic bubbles are held on the film, and the nickel-iron circles act as tracks the bubbles are pulled along. I don't think electrons in the bubble are actually moving, but I think the electron spin is. That would explain why the loops are capable of moving the bubbles faster than electrons.
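A toy way to picture that track behavior: treat a storage loop as a circular shift register, where each rotation of the external drive field advances every bubble one position along the permalloy track. This is my own illustrative model, not TI's actual design; nothing physical moves except the magnetisation pattern.

```python
# Toy model of a bubble-memory storage loop as a circular shift
# register: 1 = bubble present at a track position, 0 = no bubble.
# One rotation of the drive field advances every bubble one position.
from collections import deque

def make_loop(bits):
    """A minor loop holding `bits` at fixed track positions."""
    return deque(bits)

def rotate_field(loop, cycles=1):
    """Each field rotation shifts all bubbles one position around the loop."""
    loop.rotate(cycles)
    return loop

loop = make_loop([1, 0, 0, 1, 0])
rotate_field(loop)      # one cycle: every bubble advances together
print(list(loop))       # → [0, 1, 0, 0, 1]
```

Because the whole pattern shifts at once per field rotation, the data rate is set by the field frequency, not by any charge carriers drifting through the material.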