
Serious question: handling the opacity of AI training data in my projects

I'm frustrated that platforms don't disclose which artworks were used to train their models. This makes it hard to assess ethical implications for my commercial work. How do others research this before publishing?
3 comments

tyler_nguyen62
Imagine waiting for AI companies to list their training data... that's like expecting a used car salesman to point out every dent. Laura_wright is right about pragmatism winning, but honestly, the whole "transparency is coming soon" line from these platforms is a joke. They've been saying "just around the corner" since the first model dropped. At this point, trusting their disclosure is like believing in fairy tales.
3
beth_bell
1mo ago
Try reverse image searches on outputs...
2
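Building on the reverse-image-search tip: if you keep copies of reference artworks (or your own licensed inputs), one local approximation is perceptual hashing, which flags outputs that are visually close to a known image. This is a minimal sketch of an "average hash" in plain Python with images faked as 2D lists of grayscale values; a real workflow would use a library such as imagehash on actual files, and the threshold for "too similar" is an assumption you'd tune yourself.

```python
# Minimal perceptual "average hash" sketch (no external libraries).
# A grayscale image is modeled here as a 2D list of 0-255 pixel values;
# real pipelines would load files and downscale them first.

def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; lower means more visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 4x4 "images" and one structurally different one.
original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [11, 21, 201, 211]]
slightly_edited = [[12, 20, 198, 210],
                   [15, 28, 205, 213],
                   [12, 22, 202, 212],
                   [11, 21, 201, 209]]
unrelated = [[200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200]]

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(slightly_edited)))  # small distance
print(hamming(h_orig, average_hash(unrelated)))        # large distance
```

This only catches near-duplicates of images you already hold; it can't tell you what was in the training set, but it's a cheap first pass before publishing commercial work.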
laura_wright
Referencing your point about ethical implications being hard to assess, I've always operated on the assumption that all training data is ethically murky. That means for commercial work, I focus on output auditing and licensing my own inputs instead of chasing disclosure. The platforms aren't going to volunteer that info anytime soon, so pragmatism beats idealism here. Honestly, if you're waiting for transparency to make ethical calls, you might never publish anything. Besides, the legal frameworks are so behind that your own due diligence is what really matters in the end.
0