I've been decompiling and patching a wide range of software using codex with ida-pro-mcp and radare2 for generic targets, plus various language-specific tools for .NET and Java, for example.
I'm not paying for the tokens I use, so I just choose whatever is the most performant model OpenAI offers.
My use cases have ranged from malware analysis to adding new features to complicated EOL enterprise software without access to the source code.
> The question is can you match Java's performance without significantly increasing the cost of development and especially evolution in a way that makes the tradeoff worthwhile?
This question is mostly about the person and their way of thinking.
If you have a system optimized for frequent memory allocations, it encourages you to think in terms of small independently allocated objects. Repeat that for a decade or two, and it shapes you as a person.
If you, on the other hand, have a system that always exposes the raw bytes underlying the abstractions, it encourages you to consider the arrays of raw data you are manipulating. Repeat that long enough, and it shapes you as a person.
There are some performance gains from the latter approach. The gains are effectively free, if the approach is natural for you and appropriate to the problem at hand. Because you are processing arrays of data instead of chasing pointers, you benefit from memory locality. And because you are storing fewer pointers and have less memory management overhead, your working set is smaller.
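A minimal Java sketch of that contrast (all names and numbers here are illustrative, not from the comment): summing coordinates over individually allocated objects versus over flat primitive arrays. Both compute the same result; the flat version streams through contiguous memory instead of following a reference per element, which is where the locality and working-set wins come from.

```java
import java.util.ArrayList;
import java.util.List;

public class Layouts {
    // Object-per-element style: each Point is a separate heap
    // allocation, so the loop chases one pointer per element.
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double sumBoxed(List<Point> pts) {
        double s = 0;
        for (Point p : pts) s += p.x + p.y;
        return s;
    }

    // Flat-array style: one allocation per coordinate array,
    // contiguous in memory, so the loop walks cache lines in order.
    static double sumFlat(double[] xs, double[] ys) {
        double s = 0;
        for (int i = 0; i < xs.length; i++) s += xs[i] + ys[i];
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000;
        List<Point> pts = new ArrayList<>(n);
        double[] xs = new double[n], ys = new double[n];
        for (int i = 0; i < n; i++) {
            pts.add(new Point(i, 2.0 * i));
            xs[i] = i;
            ys[i] = 2.0 * i;
        }
        // Same additions in the same order, so the results match exactly.
        System.out.println(sumBoxed(pts) == sumFlat(xs, ys));
    }
}
```

The two loops are semantically identical; the difference the comment describes is purely in memory layout, and it only pays off when the data set is large enough for cache behavior to matter.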
Well, maybe you should live outside it instead of being so entitled as to dictate what others are allowed to do with their private property, causing so many problems in our society?
I had to use Tailscale to bust through port forwarding on chained routers because, even with the ports configured correctly, WireGuard wasn't able to get through.
My use case was remote access to a home-hosted Nextcloud instance, via an ISP-supplied fibre router (IPv4, not CGNAT), then my own GL.iNet router, then to the Nextcloud instance.
Despite setting up port forwarding correctly, WireGuard just couldn't get through that chain, whereas Tailscale got through with no problems.
The downside of using Tailscale is that it's messy to use at the same time as a VPN on your client device. Split tunnelling supposedly works, but I couldn't get it going.
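For reference, a direct WireGuard setup through a chain like that needs the listen port forwarded at every hop. A minimal sketch of the server side, with placeholder keys and addresses (none of these values are from the comment):

```ini
# Nextcloud host: /etc/wireguard/wg0.conf
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.0.0.2/32

# Both routers must forward UDP 51820 along the chain:
#   ISP fibre router -> GL.iNet WAN address,   UDP 51820
#   GL.iNet router   -> Nextcloud host LAN IP, UDP 51820
```

If any hop drops or misroutes that UDP port, the handshake silently fails, which matches the symptom described; Tailscale avoids the problem by doing NAT traversal and falling back to relays.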
Unfortunately, we’ve reached the era where pics and shorts are very much no longer proof. In a few minutes you could generate video of that exact scenario.
Horrifying read. I recently read a book about a girl who was forced into prostitution, and this reads much the same. [1] Before, I was convinced that slavery was mostly a thing of the past; how awful to find out this isn't true.
I have been booting from external drives on different hardware since 2007. I was even able to trick Windows XP into booting off a 12GB SanDisk thumb drive. (Although it was horribly slow!)
Coming back to the author's story, as others have pointed out as well, I do not think it is related to the DFU port itself. I think it depends on the BIOS/UEFI firmware that is addressing those ports, and then on the bootloader, which is responsible for finding the system (root) volume.
Nowadays this is done with volume UUIDs, so in theory the port should not matter. But even GRUB adds a hint, since discovery by UUID alone may fail.
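The GRUB hint mentioned above looks roughly like this in a generated grub.cfg (the UUID and device name here are placeholders):

```
# Locate the root volume by filesystem UUID alone:
search --no-floppy --fs-uuid --set=root 1234-ABCD

# grub-mkconfig additionally emits device hints so the search can try
# the expected disk/partition first, e.g.:
#   search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 1234-ABCD
```

The hint is only an optimization; if the drive enumerates differently on another machine, the UUID search still has to succeed on its own, which is exactly where port/enumeration differences can bite.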
Since we cannot see what is actually happening or read the logs, I would simply say: "Always use the same port for booting and installation." That usually simplifies the process.
I am quite certain "the undocumented DFU port" was the port the author initially used to install macOS to the external drive, maybe on another Mac/machine. When they changed machines, the addressing/enumeration of the ports may have differed, due to how the boot process works. Say you used port=0x3 for the first install; when you change machines, you need to find that same port=0x3 again. That is the undocumented DFU port the author mentions.
> P.S.: The DFU port is also for installing firmware (BIOS/UEFI) to the device before boot even occurs. For example, you connect one end of a USB cable to a working computer (i.e. the "master") and the other end to the DFU port of the target (i.e. the "slave") while that machine is off. A specific sequence of power/key presses puts the target machine into DFU mode, where you can overwrite the firmware (UEFI/BIOS, etc.) from the working machine... That is the purpose of DFU. Or at least to access the internal hard drive/SSD without actually booting the "slave" machine.
Sometimes the start of a greenfield project has a lot of questions along the lines of "What graph plotting library are we going to use? We don't want two competing libraries in the same codebase, so we should check it meets all our future needs."
LLMs can select a library and produce a basic implementation while a human is still reading reddit posts arguing about the distinction between 'graphs' and 'charts'.
I'd say it's more likely thanks to more cars and better news coverage: now you hear about every kid being attacked or abused, while in the past this knowledge was not so widely available, playing into parents' fears.
I sincerely doubt that if someone hears 'summon' today, they think of Dungeons and Dragons-style summoning of fantasy beings. They more likely hear 'to be made to appear before [a state power / a court / ...]'.
As such, the current understanding is closely aligned with the etymological meaning.
A fast AI image editor & generator. You can upload a photo to edit with text instructions, or create new images from prompts — all in seconds. This platform is an independent product and is not affiliated with Google. We provide access to the nanobanana model through our custom interface.
"People like you" shows that you're no better than the "NIMBYs" you so hate. Just complete refusal to accept that anyone might be different from you or have problems that aren't yours.
> It also has an advanced query planner that embraces the latest theoretical advances in query optimization, in particular techniques to unnest complex multi-join query plans, especially for queries that have a ton of joins. The TUM group has published some great papers on this.
I always wondered how good these planners are in practice. The Neumann/Moerkotte papers are top notch (I've implemented several of them myself), but a planner is much more than its theoretical capabilities; you need so much tweaking and tuning to make anything work well, especially in the cost model. Does anyone have Umbra experience and can say how well it works for things other than DBT-3?