
Diminishing returns in video game graphics


CastletonSnob


Do you think we're experiencing diminishing returns in video game graphics? It seems like the difference in graphics gets smaller with every generation.

I don't think we're ever going to get another huge leap in graphics like the one between the 4th gen (SNES/Genesis) and the 5th (N64/PS1/Saturn).

What do you think?


Personally, I don't care about game graphics; if the gameplay is great, that's all that matters :P



1 hour ago, VG_Addict said:

Do you think we're experiencing diminishing returns in video game graphics? It seems like the difference in graphics gets smaller with every generation.

I don't think we're ever going to get another huge leap in graphics like the one between the 4th gen (SNES/Genesis) and the 5th (N64/PS1/Saturn).

What do you think?

 

Here's what I think. There is already a MASSIVE gap in graphics between the PS3/360 generation and the PS4/Xbox One generation. Games are beginning to take on a lush, almost CGI-quality presentation across the board; something that was once possible only in cutscenes is now rendered in real time. Just check out this old BioShock 2 teaser trailer from years back:

 

^With our current tech, specifically aided by Unreal Engine 4, we are perfectly capable of pulling off graphical detail like this in-game. High-end gaming PCs are pushing the boundaries of what can be accomplished in the graphical space. Just in the past few years, mobile GPUs have more than tripled in power. This generation alone, Nvidia's 10-series cards are capable of demolishing 980 Ti SLI setups from just a few years back: GTX 1050, 1050 Ti, 1060, 1070, 1080, Titan X, and now the 1080 Ti AND Titan Xp, all in a single generation. The Titan Xp packs 12 TFLOPs of compute and 12 GB of VRAM.
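For rough context, peak FP32 throughput is basically just shader count x clock x 2 FLOPs per fused multiply-add. A quick sanity check on that 12 TFLOPs figure (a sketch; the ~1.58 GHz boost clock here is approximate):

# Back-of-envelope peak FP32 throughput for the Titan Xp
shaders = 3840                # CUDA cores
boost_clock_hz = 1.582e9      # ~1.58 GHz boost clock (approximate)
flops = shaders * boost_clock_hz * 2   # a fused multiply-add counts as 2 FLOPs
print(f"{flops / 1e12:.1f} TFLOPs")    # ~12.1 TFLOPs, matching the quoted spec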

But even on the VASTLY underpowered consoles, we get releases like this:

 

^Fully open world, locked at 4K and 30 frames per second, with graphical detail rivaling max settings in some of the most graphically intensive PC games to date!
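To put that in perspective, here's the raw pixel throughput a locked 4K30 target implies, with 1080p60 as a baseline (raw shading work only, as a rough sketch):

# Pixels that must be shaded per second at each target
targets = {
    "1080p @ 60 fps": 1920 * 1080 * 60,
    "4K @ 30 fps": 3840 * 2160 * 30,
}
for name, px in targets.items():
    print(f"{name}: {px / 1e6:.0f} million pixels/s")
# 4K30 is ~249M pixels/s -- double 1080p60's ~124M, before any per-pixel cost increases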

 

Graphics will keep improving at a rapid pace. I do not believe we will reach diminishing returns in graphical detail and flourishes until much, much later, maybe 20-50 years from now. For now, revel in the work these insanely passionate artists and tech heads have put into getting beautiful visuals like this running in our games!

 

 

Edited by K.Rool Addict

There's a principle called Moore's Law that has been a notable factor since around 1965; basically, transistors (one of the backbone components in all circuitry and modern electronics) are predicted to double in density every two years, give or take. Supposedly, this has had a large hand in increasing memory size, computation speed, and the overall capability of the computer components that handle processing/logic. Heck, new component designs often assume predicted hardware in other areas that doesn't exist yet, counting on this observation to hold.
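Just to show what that doubling implies, here's a toy projection (a sketch only; real scaling obviously hasn't been this clean):

# Moore's Law as stated: density doubles roughly every 2 years
def projected_density(d0, years, doubling_period=2.0):
    """Project a starting transistor density d0 forward by some number of years."""
    return d0 * 2 ** (years / doubling_period)

print(projected_density(1.0, 10))   # 32.0 -- i.e. ~32x the density after a decade, if the trend held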

However, partially going off the word of a teacher or two of mine, we are either hitting a wall, or can see one on the horizon, where shrinking the components actually harms their functionality; at a certain point, they're just so small that they don't work right, face interference/instability, or are simply impossible to create. If I'm doing my research right, researchers and manufacturers have created transistors at the 7 and even 5 nm scale, but at that point noise/interference and quantum tunneling become huge factors, and heck, you're practically working at molecular widths anyway. :ooh: You can't really decrease the size much further at that point.
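The "practically molecular" part checks out with simple arithmetic, using silicon's lattice constant of about 0.543 nm:

# How many silicon unit cells fit across a 5 nm feature?
feature_nm = 5.0
si_lattice_nm = 0.543     # silicon's cubic lattice constant
print(feature_nm / si_lattice_nm)   # ~9 unit cells across -- only a few dozen atoms wide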

Anyways, so far Moore's Law has been a core part of rapidly increasing specs, supercomputers, etc., as making transistors and similar components smaller and smaller yields better performance. And since graphics depend on specs, I think it's reasonable to foresee some kind of diminishing returns occurring before too long, or at least generations where you can't really rely on better components to make much of a difference and instead simply need more of them.

On the flip side, supposedly the plateau in miniaturization and transistor density leads to trying to increase efficiency in other ways, like better programming methods or redesigned computer architectures. Regardless, we likely can't rely on things improving at the speed they have been for much longer. Certain areas still have a good bit of headroom and may keep advancing as fast as they have been, but there is a wall; past it, we'll either see very slow progress or need some massive change in how computing works to get comparable results. :grin: If I recall, this is actually part of why quantum computing has garnered so much interest: it would theoretically be a massive improvement over binary architecture. And with games, you're likely to see a lot of improvements from non-hardware sources, like simply redesigning rendering methodology to be more efficient.


3 minutes ago, SFyr said:

There's a principle called Moore's Law that has been a notable factor since around 1965; basically, transistors (one of the backbone components in all circuitry and modern electronics) are predicted to double in density every two years, give or take. Supposedly, this has had a large hand in increasing memory size, computation speed, and the overall capability of the computer components that handle processing/logic. Heck, new component designs often assume predicted hardware in other areas that doesn't exist yet, counting on this observation to hold.

However, partially going off the word of a teacher or two of mine, we are either hitting a wall, or can see one on the horizon, where shrinking the components actually harms their functionality; at a certain point, they're just so small that they don't work right, face interference/instability, or are simply impossible to create. If I'm doing my research right, researchers and manufacturers have created transistors at the 7 and even 5 nm scale, but at that point noise/interference and quantum tunneling become huge factors, and heck, you're practically working at molecular widths anyway. :ooh: You can't really decrease the size much further at that point.

Anyways, so far Moore's Law has been a core part of rapidly increasing specs, supercomputers, etc., as making transistors and similar components smaller and smaller yields better performance. And since graphics depend on specs, I think it's reasonable to foresee some kind of diminishing returns occurring before too long, or at least generations where you can't really rely on better components to make much of a difference and instead simply need more of them.

On the flip side, supposedly the plateau in miniaturization and transistor density leads to trying to increase efficiency in other ways, like better programming methods or redesigned computer architectures. Regardless, we likely can't rely on things improving at the speed they have been for much longer. Certain areas still have a good bit of headroom and may keep advancing as fast as they have been, but there is a wall; past it, we'll either see very slow progress or need some massive change in how computing works to get comparable results. :grin: If I recall, this is actually part of why quantum computing has garnered so much interest: it would theoretically be a massive improvement over binary architecture.

 

Question:

Is it really all that necessary to keep shrinking these transistors? Why not just use larger components to get better performance? I get it for laptops and such, but for gaming PCs and consoles it seems like it should practically be a non-issue >.>


@K.Rool Addict, I think it changes the methods of improvement just a little? ;) Supercomputers are built not only by getting the best parts, but also by stacking a ton of them together and coordinating them, which means more cooling, more power draw, more expense, more components devoted to coordination--plus the communication pathways between parts get longer. (Which may be somewhat negligible, admittedly, but there's a reason RAM and cache memory are located so close to the CPU; it really does make a difference.)
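To put rough numbers on the pathway-length point (assuming a ~3 GHz clock; real signals in copper traces are slower than light, so this is a best case):

# Best-case distance a signal can travel in one clock cycle
c = 3.0e8           # m/s, speed of light
clock_hz = 3.0e9    # a typical ~3 GHz CPU clock
print(f"{c / clock_hz * 100:.0f} cm per cycle")   # ~10 cm -- a part even tens of cm away costs multiple cycles just in transit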

I could see consoles and gaming PCs just going the route of getting bigger, which is fair, but whatever they do, I think it comes down to cost versus benefit. I don't have strong data to back this up, but it seems like you'd get less and less performance per dollar past that point, unless manufacturing parts gets dramatically cheaper. That in turn would still mean fewer gains from hardware sources, which have been one of the major sources of improvement.

But as I said, I'm a little less familiar with this area (yay markets and manufacturing costs), so don't take my word for it; you might be right that it's less of an issue. But Moore's Law and steadily, predictably increasing transistor density have been one of the core driving forces behind better computing for decades, and to my knowledge it's hitting its limit, or at least the point where the limit is on the horizon, save for improvements in other areas.


4 minutes ago, SFyr said:

@K.Rool Addict, I think it changes the methods of improvement just a little? ;) Supercomputers are built not only by getting the best parts, but also by stacking a ton of them together and coordinating them, which means more cooling, more power draw, more expense, more components devoted to coordination--plus the communication pathways between parts get longer. (Which may be somewhat negligible, admittedly, but there's a reason RAM and cache memory are located so close to the CPU; it really does make a difference.)

I could see consoles and gaming PCs just going the route of getting bigger, which is fair, but whatever they do, I think it comes down to cost versus benefit. I don't have strong data to back this up, but it seems like you'd get less and less performance per dollar past that point, unless manufacturing parts gets dramatically cheaper. That in turn would still mean fewer gains from hardware sources, which have been one of the major sources of improvement.

But as I said, I'm a little less familiar with this area (yay markets and manufacturing costs), so don't take my word for it; you might be right that it's less of an issue. But Moore's Law and steadily, predictably increasing transistor density have been one of the core driving forces behind better computing for decades, and to my knowledge it's hitting its limit, or at least the point where the limit is on the horizon, save for improvements in other areas.

 

I would think the more complex, space-efficient components would be the more expensive ones :o Larger would probably be cheaper, considering the less intensive engineering required to keep shrinking these transistors down.


@K.Rool Addict, maybe so, haha. I think that's still a bit out of my realm of knowledge. :P Although there's still the cost-to-benefit deal; sometimes an extra 30% cost for 40% more performance is still worth it, even if you end up using less of it. And to my knowledge, computer cycles are so rapid nowadays that communication latency becomes one of the major limiting aspects of computing, which is very much tied to how close things are and how long it takes to retrieve/use/store data.

I wouldn't be surprised if consoles/gaming PCs are a good bit behind the cutting edge, though, except maybe top-tier systems that pay to have the best on the market. Still, it seems like small components are being used for numerous things commercially: https://en.wikipedia.org/wiki/Transistor_count (note the Apple A10, used in the iPhone 7).

Though, thinking about it, maybe I'm arguing more that the absolute cutting edge is near its wall, and less so consoles and typical gaming PCs, which have to be cheap enough to be commercially viable en masse. :please: There could be a good stretch of time where improvement largely comes from that cutting edge becoming cheaper to get hold of.


2 hours ago, cmarston1 said:

At this point in time I am kinda burnt out on graphics and I don't really see much difference between console generations.

Graphics have reached a point where they're stagnating, with smaller differences in each subsequent generation, but hey, that's the trend >_>. It's not the same as seeing the transition from SNES to N64, or from PS1 to PS2. I think this focus on graphics has hurt the industry a little, as many games focus on visuals over... an actually engaging gaming experience, JUST LIKE ANIME :lol:


