• 0 Posts
  • 402 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • The Hejl-Dawson tonemapper is a filmic tonemapper built by EA years ago. It’s very contrasty, similar to ACES (what Unreal mimics in SDR and uses for HDR).

    The problem is, it completely crushes black detail.

    http://www.desmos.com/calculator/nrxjolb4fc

    Here it is compared to the other common one, the Uncharted 2 tonemapper:

    Everything that dips under 0 gets crushed to black.
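
    A minimal sketch of the two curves (constants as commonly quoted from John Hable’s “Filmic Tonemapping Operators” write-up; the exposure bias and white point are the usual example values, not anything a particular game ships):

    ```ts
    // Hejl / Burgess-Dawson filmic approximation (EA). Output is already
    // gamma-encoded; note the hard 0.004 toe cut-off on the input.
    function hejlDawson(x: number): number {
      const c = Math.max(0, x - 0.004);
      return (c * (6.2 * c + 0.5)) / (c * (6.2 * c + 1.7) + 0.06);
    }

    // Hable's Uncharted 2 operator with the usual example constants.
    const A = 0.15, B = 0.50, C = 0.10, D = 0.20, E = 0.02, F = 0.30, W = 11.2;
    function hableCurve(x: number): number {
      return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F;
    }
    function hableUncharted2(x: number, exposureBias = 2.0): number {
      const mapped = hableCurve(x * exposureBias) / hableCurve(W);
      return Math.pow(mapped, 1 / 2.2); // gamma-encode for display
    }

    // Shadow behaviour: Hejl-Dawson clamps every input at or below 0.004 to
    // pure black, while Hable still keeps a little gradation down there.
    for (const x of [0.001, 0.004, 0.01, 0.05]) {
      console.log(x, hejlDawson(x).toFixed(4), hableUncharted2(x).toFixed(4));
    }
    ```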

    To note, it’s exclusively an SDR tonemapper.

    I’ve found this tonemapper in Sleeping Dogs as well, and when modding that game for HDR it was very noticeable how much it crushed. Nintendo would need to change the tonemapper to an HDR one or, what I think they’ll do, fake the HDR by just scaling up the SDR image.

    To note, I’ve replaced the tonemapper in Echoes of Wisdom with a custom HDR tonemapper via Ryujinx and it’s entirely something Nintendo can do. I just doubt they will.










  • Not all projects need VC money to get off the ground. I’m not going to hire somebody for a pet project because CMake’s syntax is foreign to me or a pain in the ass to write, or because I’m not interested in spending 2 hours clicking through their documentation.

    Or, if you’ve ever used DirectX, the insane “code by committee” way it works. The documentation is ass and at best you get code samples. Hell, I had to ask Copilot to tell me how something in DXCompiler worked, and it could, because the answer was buried somewhere in a 5000-line cpp file. It was right, and to this day I have no idea how it came up with the correct answer.

    There is no money in most FOSS. Maybe you’ll find somebody who’s interested in your project, but it’s extremely rare that somebody latches on. At best, you both have your own unique, personal projects and they overlap. But sitting around waiting for somebody to come along while your project grinds to a halt is just not a thing when an AI can help write the stuff you’re not familiar with.

    I know “AI bad” and I agree with the sentiment most of the time. But I’m personally okay with the contract of, I feed GitHub my FOSS code and GitHub will host my repo, run my actions, and host my content. I get the AI assistance to write more code. Repeat.


  • There’s a lot of false equivalence in this thread which seems to be a staple of this instance. I’m sure most people here have never used AI coding and I’m just getting ad-hominem “counterpoints”.

    Nothing I said comes even close to saying AI is a full replacement for training junior devs.

    The reality is, when you actually use an AI as a coding assistant, there are strong similarities to training somebody who is new to coding. They’ll choose popular practices over best practices. When I get an AI-assisted code segment, it feels similar to code copy-pasted from Stack Overflow. And that’s aside from the hallucinations.

    But LLMs operate on patterns, for better or for worse. If you want to generate something serious, that’s a bad idea. There’s a strong misconception that AI will build usable code for you. It probably won’t. It’s only good at snippets. But it does recognize patterns. Some of those patterns are tedious to write, and I’d argue they feel even more tedious the more coding experience you have.

    My most recent use of AI was making a script that uses WinGet to set up a dev environment. I have a vague recollection of how to make a .cmd script with if branches, but not enough off the top of my head. So you can say “Generate a section here that checks if WinSDK is installed.” And it will. Looks fine, move on. The %errorlevel% code is all injected. Then say “add on a WinGet install if it’s not installed.” Then it does that. Then I repeat all that for ninja, clang, and the others. None of this is mission critical, but it’s a chore to write. It’ll even sprinkle in some pretty CLI output text.
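
    Roughly the boilerplate pattern I mean, sketched here as a small Node/TypeScript script rather than the actual generated .cmd (the tool list and winget package IDs are just assumptions for illustration):

    ```ts
    import { spawnSync } from "node:child_process";

    // Hypothetical tool list; the winget IDs are assumptions for this sketch.
    const tools = [
      { exe: "ninja", wingetId: "Ninja-build.Ninja" },
      { exe: "clang", wingetId: "LLVM.LLVM" },
    ];

    for (const tool of tools) {
      // `where` exits non-zero when the executable is not on PATH
      // (the .cmd equivalent of checking %errorlevel%).
      const found = spawnSync("where", [tool.exe], { shell: true }).status === 0;
      if (found) {
        console.log(`[ok] ${tool.exe} already installed`);
        continue;
      }
      console.log(`[..] installing ${tool.exe} via WinGet`);
      const install = spawnSync("winget", ["install", "-e", "--id", tool.wingetId], {
        stdio: "inherit",
        shell: true,
      });
      if (install.status !== 0) {
        console.error(`[!!] failed to install ${tool.exe}`);
        process.exitCode = 1;
      }
    }
    ```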

    There is a strong misconception that AI is “smart” and programmers should be worried. That completely oversells what AI can do, probably intentionally on the part of executives. At best they are assistants to coders. I can take a piece of JS code and ask an AI to construct an SQL table creation query based on it (or vice versa). It’s not difficult. Just tedious.
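
    The kind of mapping I mean: something like the model below in, the matching DDL out (the interface, table name, and column types here are made-up examples, not any particular project):

    ```ts
    // The kind of object you'd hand the assistant...
    interface UserRow {
      id: number;
      email: string;
      displayName: string | null;
      createdAt: Date;
    }

    // ...and the kind of statement you'd expect back. The types and
    // constraints are one plausible mapping, not the only correct one.
    const createUsersTable = `
      CREATE TABLE users (
        id           INTEGER PRIMARY KEY,
        email        TEXT NOT NULL,
        display_name TEXT,
        created_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
      );
    `;
    ```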

    When working in teams, it’s not uncommon for me to create the first 5%-10% of a project and instruct others on the team to take that as input and scale out the rest of the project (eg: design views, build tests, build tables, etc).

    There are clear parallels here. You need to recognize the limitations, but there is a lot of functionality AI can provide as long as you understand what it can’t do. Read the comments of people who have actually sat down and used it and you’ll see we’ve reached the same conclusion.








  • Definitely not. NoJS is not better for accessibility. It’s worse.

    You need to set the ARIA states via JS. Believe me, I’ve written an entire component library with this in mind. I thought that NoJS would be better: have an HTML and CSS core and add JS on afterwards. Then for my second rewrite I made it JS-first, and it’s all around better for accessibility. Without JS you’d be leaning on a slew of hacks that just make accessibility suffer. It’s neat to make those NoJS components, but you have to hijack checkboxes or radio buttons in ways they were never intended to work.
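
    A minimal example of what “setting ARIA states via JS” means in practice, using a disclosure toggle (the markup and IDs are made up for illustration):

    ```ts
    // <button id="menu-toggle" aria-expanded="false" aria-controls="menu">Menu</button>
    // <ul id="menu" hidden>...</ul>
    const toggle = document.getElementById("menu-toggle") as HTMLButtonElement;
    const menu = document.getElementById("menu") as HTMLUListElement;

    toggle.addEventListener("click", () => {
      const expanded = toggle.getAttribute("aria-expanded") === "true";
      // The state a screen reader announces lives in the attribute,
      // so it has to be flipped by JS alongside the visual change.
      toggle.setAttribute("aria-expanded", String(!expanded));
      menu.hidden = expanded;
    });
    ```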

    The needs of those with disabilities far outweigh the needs of those who want a no script environment.

    > While with WAI ARIA you can just quickly assert that the page is compliant with a checker before pushing it to live.

    Also no. You cannot check accessibility with HTML tags alone. Full stop. You need to check the ARIA tags manually. You need to ensure states are updated. You need to add custom JS to handle key events so that your components work as suggested by the ARIA Authoring Practices. Relying on native components is not enough. They get you part of the way there, but you’ll also run into incomplete native components that don’t work as expected (eg: touch events in Safari don’t work the same as in Chrome and Firefox).
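
    For example, an element promoted to role="button" gets none of the native keyboard behaviour, so you end up writing something like this (a simplified sketch of the APG button pattern, not a complete implementation):

    ```ts
    // <span id="save" role="button" tabindex="0">Save</span>
    const fauxButton = document.getElementById("save")!;

    fauxButton.addEventListener("keydown", (event: KeyboardEvent) => {
      // A native <button> activates on Enter and Space for free; a
      // role="button" element only does so if you wire it up yourself,
      // as described in the ARIA Authoring Practices button pattern.
      if (event.key === "Enter" || event.key === " ") {
        event.preventDefault(); // stop Space from scrolling the page
        fauxButton.click();
      }
    });
    ```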

    The sad thing is that accessibility testing is still rather poor. Chrome has the best way to automate testing against the accessibility tree, but it’s still hit or miss at times. It’s worse with Firefox and Safari. You need to double-check with manual testing to ensure the ARIA states are reported correctly. Even with attributes set correctly, there’s no guarantee browsers will handle them properly.

    I have a list of bugs still not fixed by browsers, but at least I’ve written workarounds for them, and those workarounds require JS to work as expected and have proper accessibility.

    The good news is that we were able to stop the Playwright devs from adopting this poor approach of relying on HTML only for ARIA testing, and it can now take accessibility tree snapshots based on realtime JS values.
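
    For anyone curious, that ends up looking roughly like this in a Playwright test (recent Playwright versions; the page URL, selectors, and names are made up):

    ```ts
    import { test, expect } from "@playwright/test";

    test("menu toggle exposes its state to assistive tech", async ({ page }) => {
      await page.goto("https://example.com/"); // placeholder URL

      const toggle = page.getByRole("button", { name: "Menu" });

      // Assert the live ARIA state, not just the static HTML.
      await expect(toggle).toHaveAttribute("aria-expanded", "false");
      await toggle.click();
      await expect(toggle).toHaveAttribute("aria-expanded", "true");

      // Newer Playwright versions can also snapshot the accessibility tree.
      await expect(page.locator("nav")).toMatchAriaSnapshot(`
        - navigation:
          - button "Menu" [expanded]
      `);
    });
    ```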