hysterix

Alienware Graphics Amplifier: which graphics cards fit? RTX


Hi, I wanted to know which graphics cards fit in the Amplifier and whether the case will still close.

RTX 2080, RTX 2080 Ti


Here is the list of supported graphics cards: https://www.dell.com/support/article/de-de/sln300946/alienware-graphics-amplifier-supported-graphics-card-list?lang=en

But you have to watch the card's size. According to the linked page, it should be no longer than 10.5 inches (26.7 cm) and no higher than 2 expansion slots. Some graphics cards are also too tall, in which case the AGA no longer closes. I haven't found an exact measurement, but it has been discussed here in the forum before.

On 26.9.2020 at 10:55, hysterix wrote:

RTX 2080, RTX 2080 Ti

Why do you want to install an old card? Better to wait until the RTX 30X0 cards are more widely available.

Edited by einsteinchen

2 hours ago, einsteinchen wrote:

Here is the list of supported graphics cards: https://www.dell.com/support/article/de-de/sln300946/alienware-graphics-amplifier-supported-graphics-card-list?lang=en

But you have to watch the card's size. According to the linked page, it should be no longer than 10.5 inches (26.7 cm) and no higher than 2 expansion slots. Some graphics cards are also too tall, in which case the AGA no longer closes. I haven't found an exact measurement, but it has been discussed here in the forum before.

Why do you want to install an old card? Better to wait until the RTX 30X0 cards are more widely available.

I'm waiting for the 3070 and 3060 cards first anyway. But I think the prices of the 2080 Ti will drop drastically. The Founders Edition cards should be more likely to fit, shouldn't they?


Oh, and is the built-in power supply sufficient, or should you install a different one? If so, which one would fit?

Edited by hysterix


That depends on the graphics card. How many watts does the built-in power supply have? You also need enough 6+2-pin PCIe power cables. The most power-hungry current graphics card is the RTX 3090, which can draw as much as 450 watts under load. The RTX 20X0 series and weaker RTX 30X0 cards draw less.
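To make that concrete, here is a rough sanity check of whether a given GPU fits a PSU and its cabling. All wattages and the 80% headroom rule are illustrative assumptions, not official Dell specs:

```python
# Rough sketch (illustrative numbers, not official Dell specs): check whether
# a GPU's board power fits the AGA's PSU and its PCIe power cabling.

AGA_PSU_WATTS = 460          # PSU in the AGA as discussed in this thread
SLOT_POWER_WATTS = 75        # a PCIe x16 slot supplies up to 75 W
PIN6_WATTS = 75              # one 6-pin PCIe power cable
PIN8_WATTS = 150             # one 8-pin (6+2) PCIe power cable

def fits_psu(gpu_board_power, cables=("8pin", "8pin"), headroom=0.8):
    """True if the GPU stays within ~80% of PSU capacity and the cables
    plus the slot can deliver its board power."""
    cable_watts = sum(PIN8_WATTS if c == "8pin" else PIN6_WATTS for c in cables)
    deliverable = SLOT_POWER_WATTS + cable_watts
    return gpu_board_power <= AGA_PSU_WATTS * headroom and gpu_board_power <= deliverable

print(fits_psu(250))  # RTX 2080 Ti class, ~250 W
print(fits_psu(450))  # RTX 3090 class, ~450 W -> exceeds a 460 W PSU's headroom
```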

2 hours ago, einsteinchen wrote:

That depends on the graphics card. How many watts does the built-in power supply have? You also need enough 6+2-pin PCIe power cables. The most power-hungry current graphics card is the RTX 3090, which can draw as much as 450 watts under load. The RTX 20X0 series and weaker RTX 30X0 cards draw less.

It has a 460-watt power supply. 2x 8-pin connectors, which can also be run as 2x 6-pin, or as 8+6, depending on what's needed.


Then you can run most graphics cards with it, as long as they physically fit :)

On 28.9.2020 at 10:49, einsteinchen wrote:

Then you can run most graphics cards with it, as long as they physically fit :)

The Founders Edition seems to fit almost universally, right? Maybe I'll also wait for Big Navi from AMD.

My i7-9750H currently runs undervolted at 3.6 GHz on all cores, but with the internal GPU disabled, 4 GHz should also be possible, which should actually be enough even for an RTX 2080 Ti.

Edited by hysterix

3 hours ago, hysterix wrote:

The Founders Edition seems to fit almost universally, right?

Not the RTX 3090. I'm not sure about the RTX 3080; check the dimensions.

Generally speaking: the faster the GPU, the more the CPU limits it, and the more performance you lose with the AGA (it is only connected via 4x PCIe 3.0). If you put an RTX 3080 in the AGA and pair it with a mobile CPU, don't expect miracles. Faster than the dedicated GPU in the notebook, but not as fast as the same card in a desktop PC.

Since the new RTX 30X0 cards also support PCIe 4.0, I don't know how such a card reacts to only 4x PCIe 3.0. It should work, but it's always worth looking for user reports.


So the 3090 is out of the question, I don't need it. Neither is the 3080. If anything, I'm going for the 3070, the 2080 Super, or the 2080 Ti. I don't want to spend more than 450 euros. I'll get one for Christmas; by then something should have happened with the 3070.

Edited by hysterix

37 minutes ago, hysterix wrote:

If anything, I'm going for the 3070, the 2080 Super, or the 2080 Ti

The Founders Editions of the RTX 20X0 series should indeed all fit. There are also good custom designs. I would tend to wait for the RTX 3070 as well.

 
On 29.9.2020 at 21:58, einsteinchen wrote:

Not the RTX 3090. I'm not sure about the RTX 3080; check the dimensions.

Generally speaking: the faster the GPU, the more the CPU limits it, and the more performance you lose with the AGA (it is only connected via 4x PCIe 3.0). If you put an RTX 3080 in the AGA and pair it with a mobile CPU, don't expect miracles. Faster than the dedicated GPU in the notebook, but not as fast as the same card in a desktop PC.

Since the new RTX 30X0 cards also support PCIe 4.0, I don't know how such a card reacts to only 4x PCIe 3.0. It should work, but it's always worth looking for user reports.

 
Something interesting from nbfr on the AGA bottleneck topic:

Unfortunately, there is not a simple answer to this question. I'll try to give the best explanation I can because it gets asked a lot and misinformation abounds. For context, my comments apply to the recent Alienware models post 15R3 / 17R4. Some of the earlier devices did whacky things with the routing of PCIe lanes and resource allocation that make them a little different. TL;DR at the end.

Cause of the AGA Bottleneck
The bottleneck imposed by the AGA is a direct function of how much information needs to be sent over its 4x PCIe 3.0 bus. The interface technically suffers a very small amount of bandwidth loss compared to the 4x PCIe 3.0 bus of a typical desktop computer because of the way it is implemented in the AGA. The Caldera connector, cable length and inferior EMI shielding inherent in its design all contribute, but it's a very small amount we can ignore for the sake of explanation.

The amount of information traversing the PCIe bus is primarily determined by how much work the CPU has to perform and subsequently transmit to the GPU, and vice versa. In most cases--especially when gaming--the majority of PCIe traffic navigates the bus from CPU to GPU, so this is where we are most worried about bottlenecking. This assumption does not necessarily hold when the GPU is working on a task in a headless configuration (no display being drawn at all) or when an image is being rendered on one GPU and drawn on a display attached to another GPU. For simplicity, let's assume we are discussing the AGA bottleneck when gaming on a display directly attached to the GPU in the AGA. When using the AGA to render an image and output it on the internal laptop display, users will experience a wide range of performance results, influenced by the model of laptop, the version of Windows being used and the OS settings applied. Microsoft has been very active in finding new ways to accommodate systems with multiple GPUs, rendering and outputting on different GPU devices, and GPU scheduling, so the results in such a condition are likely to change even over the next few months.

Examples of where the AGA bottleneck does not exist
If an AGA bottleneck is a function of how much information traverses the PCIe bus, let's use a few practical examples to understand where the PCIe bus limits do and do not exist. I am going to generalize a bit here for the sake of example, but the result holds true on average. In almost all cases, a CPU will have more information to transmit across the PCIe bus to the GPU when frame rates increase, all else constant. Frame rates are not the only factor, but they are an easy one to understand. For most games, a CPU generates much more information to send to the GPU at 120 FPS compared to 60 FPS, assuming the frame rate is all that changes. Such is the case when a frame limiter is used and nothing else about the visual settings or resolution is changed. High CPU loads alone are insufficient to create a bottleneck. For example, in Civilization 6, the AI turn times towards the end of the game can get very slow while the CPU works hard to calculate the thousands of actions the AI must perform. Despite working the CPU hard, these late-game AI turns have largely no effect on any bottleneck imposed by the AGA because very little of this work performed by the CPU needs to be sent to the GPU.

Similarly, extremely high GPU loads alone do not exacerbate an AGA bottleneck. A very simple game requiring minimal CPU power but drawn at an absurd 16k resolution will not require a lot of information sent across the PCIe bus, but will require an enormous amount of horsepower from the GPU. Another extreme example of GPU bottlenecking with minimal stress on the PCIe bus exists in a lot of non-gaming GPU work such as data science and crypto currency mining. If you have a moment to look at a crypto currency mining motherboard, you'll find many have a dozen or more 1x PCIe 2.0 slots to populate with powerful GPUs. Despite each one of these slots having 1/8th the bandwidth of even our lowly AGA, the 1x PCIe 2.0 interface does not bottleneck even the most powerful GPUs when it comes to crypto mining. The PCIe bus in this case is only used to send raw data to fill the VRAM, which the GPU slowly processes. The rate the GPU works through the data stored in its VRAM is slower than even the abysmal bitrate of a 1x PCIe 2.0 interface. Again, no bottleneck exists.
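As a quick sanity check on the "1/8th the bandwidth" comparison above, per-lane throughput can be computed from the published PCIe transfer rates and line encodings (a sketch of the raw link math; real-world throughput is somewhat lower due to protocol overhead):

```python
# Usable PCIe link bandwidth from transfer rate (GT/s) and line encoding.
# PCIe 2.0 uses 8b/10b encoding; PCIe 3.0 uses 128b/130b.

def lane_bandwidth_mb_s(gts, payload_bits, encoded_bits):
    """Usable bandwidth of one PCIe lane in MB/s."""
    return gts * 1e9 * (payload_bits / encoded_bits) / 8 / 1e6

pcie2_x1 = 1 * lane_bandwidth_mb_s(5, 8, 10)      # mining-board slot
pcie3_x4 = 4 * lane_bandwidth_mb_s(8, 128, 130)   # AGA link

print(f"1x PCIe 2.0: {pcie2_x1:.0f} MB/s")   # 500 MB/s
print(f"4x PCIe 3.0: {pcie3_x4:.0f} MB/s")   # ~3938 MB/s
print(f"ratio: {pcie3_x4 / pcie2_x1:.1f}")   # ~7.9, i.e. roughly 1/8th
```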

Where the AGA bottleneck does exist
So if the above extreme cases are where an AGA bottleneck does not exist, where will an AGA bottleneck be the most pronounced? The answer is where neither the GPU nor CPU are bottlenecks themselves, yet the information transmitted from CPU to GPU remains high. One such situation exists when playing relatively modern titles at low resolutions, and with exceptionally high frame rates. Such is the case with esports titles, where a game like CS:GO is still regularly played at very low resolutions of 1024x768. Even these games played on these competitive graphics settings are likely to hit a CPU or GPU bottleneck before the PCIe bottleneck on some hardware. Such was the case with my first AGA-compatible 15R4 (i7-8750h), where I used a GTX 1080 in the AGA. The worst PCIe bottleneck I ever experienced (and was able to reliably measure) with this system was in CS:GO at 1024x768 and low settings. The AGA bottleneck in this scenario was an average frame reduction of about 10%, as measured against the exact same GPU in a desktop with an 8700k tuned to the equivalent TDP and frequency limits of the Alienware's 8750h. The case of CS:GO at 1024x768 low was the worst result I was able to identify after trying about 8 different games, all at unrealistically low resolutions. Rocket League also exhibited very similar behavior to CS:GO at 1280x800. While this is a seemingly significant result, the resolution is ridiculously low. I never experienced a bottleneck with this 15R4/AGA GTX 1080 combination when playing a game at 1080p resolution or higher, no matter the game or graphics settings configuration. The CPU or the GPU would hit a wall before the PCIe bus bottlenecked.

The 15R4 and GTX 1080 are showing their age, so what about a more modern Alienware configuration? At the other end of the spectrum is my current system: an Area 51m with a 9900ks (tuned to only 4.4 GHz for heat & noise) and a Titan RTX in the AGA. This is more power than almost anyone else will try to force through the AGA, but it presents a useful anecdote at the other extreme end of the spectrum. In the same conditions as the earlier CS:GO test on the 15R4, the Area 51m exhibits an average reduction of 28% in framerate compared to the exact same CPU (identical power and frequency limits) and GPU in a Z390 desktop build. While this is a seemingly huge deficit in comparison to that exhibited by the 15R4, remember this is at 1024x768, low settings and with the best CPU + GPU combination available in an Alienware laptop + AGA (at least until the A51m R2 hits shelves). At a more realistic 1080p resolution, the deficit was only 11% on average compared to the desktop equivalent. The gap shrunk to 7% and 4% at 1440p and 4k respectively.

While I was never able to reliably measure a drop in performance for the 15R4+GTX1080 system compared to its desktop counterpart at 1080p or above, the same cannot be said for the 51m + Titan RTX system. In the 8 games I tried, my experience is that you should expect a decrease in performance of 5-10%, compared to a desktop equivalent using the AGA with this level of elite hardware at 1080p. To oversimplify this result a bit, as the average FPS of the game increases, so too will the prevalence of an AGA bottleneck. At 1440p, the performance difference shrinks to between 4-7% and at 4k it closes to 2-5%. 
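Taken at face value, the worst-case percentages above translate into frame rates roughly like this (a toy calculation using only the figures quoted in this post):

```python
# Toy calculation: expected AGA frame rate from a desktop-equivalent frame
# rate, using the worst-case deficits quoted above for the 51m + Titan RTX.

AGA_DEFICIT = {"1080p": 0.10, "1440p": 0.07, "4k": 0.05}

def expected_aga_fps(desktop_fps, resolution):
    """Desktop FPS reduced by the worst-case AGA deficit at this resolution."""
    return desktop_fps * (1 - AGA_DEFICIT[resolution])

print(expected_aga_fps(144, "1080p"))  # ~130 FPS instead of 144
print(expected_aga_fps(144, "4k"))     # ~137 FPS instead of 144
```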

I have also tried similar testing on the 51m with a 2070S and a 2080 in the AGA. I concluded there is effectively no practical difference between using them in the AGA compared to a desktop. Perhaps 1-3% variation at most, but I'm certainly not sharp enough to recognize a difference that small, and it could very well be within the margin of error for the hardware. The difference grew slightly at 1080p, but still always remained within 5% of the desktop.

Last thoughts
A few last things to mention. First, Nvidia is emphasizing GPU features such as tensor cores and raytracing that largely perform without help from a CPU. Turning on raytracing on a compatible AGA GPU results in the expected reduction to framerates we would expect from raytracing, but it causes the GPU to bottleneck earlier, reducing the performance impact of an AGA attributable bottleneck. Said another way, the cost of enabling raytracing on an AGA RTX GPU is comparatively less than enabling RTX in a desktop. If a desktop were to drop 20% framerate performance by enabling RTX, enabling it on an AGA RTX GPU might only drop your framerate by 15%. This isn't particularly novel, but if features like raytracing are important to you, the AGA won't hold you back.

Second, Microsoft is moving towards more work being done on a GPU without help from the CPU through scheduling and DX12. I expect this to further reduce the stress on the PCIe bus as functions get moved directly to the GPU and no longer need to traverse the PCIe bus. I plan to test some of the new Windows and Nvidia driver updates in the AGA next month to see what improvements have been made.

Third, many gamers with the money to spend on a fancy Alienware laptop, AGA and another GPU will also do their AGA gaming on an external monitor with a resolution higher than 1080p. Very little bottlenecking exists under these conditions, even with extremely high end hardware. Only the most devout esports gamers who want to play at exceptionally low resolutions (<720p) and high FPS should absolutely shy away from the AGA in favor of a desktop because of AGA bottlenecking.

Lastly, a well-thought-out system and properly configured game settings will always provide the best experience. Please don't pair an 8750h with a 2080ti and expect to get the same FPS as people with the same GPU in a 9900k desktop. Your laptop chip is inferior to the desktop ones used for GPU reviews. Even the 9900ks in my 51m is grossly inferior to the same chip unrestricted under Noctua copper in a desktop. These are not the fault of the AGA, but rather the result of our world's current fascination with thin-and-light machines. You can always modify your graphics settings to find the right balance of visuals and frames that work to minimize any premature GPU, CPU or AGA PCIe bottleneck. 

Overall, the AGA is still a fantastic performer in 2020 and outperforms any TB3 enclosure. 

 




TL;DR

  • Anyone with the i7-9750h and an AGA RTX 2080 or lower will experience little to no bottlenecking due to the AGA when using an external monitor of 1440p or above. Folks using 1080p displays can expect a deficit of 5-7% at most as compared to an equivalent desktop.
  • People with elite hardware (i9-9880h / AGA RTX 2080ti) should expect a worst case AGA bottleneck of 10% at 1080p, 7% at 1440p and 5% at 4k compared to an equivalent desktop.
  • Severe cases of AGA bottlenecking can be seen at the extremes, with as much as a 30% decrease in framerate seen in certain esports titles played at 1024x768 resolutions and low settings. These experiences are rare and limited to situations where very high end hardware (51m + 9900ks + AGA RTX Titan) is used with exceptionally low visual settings.


@Nuestra can you format the text properly? I can't see parts of it because the text and the background are both black :D

 

Bildschirmfoto 2020-09-30 um 12.44.12.png


So, the 3090 has already been tested as an eGPU

 

 

 



I saw that someone tested the 3080 in a Razer Core and got no performance gain over the notebook 2080 ;) let's see how the 3080 performs in the AGA.

 


 

Spoiler

 

Mal was interessantes von nbfr zum Thema bottleneck aga.

Anyone with the i7-9750h and an AGA RTX 2080 or lower will experience little to no bottlenecking due to the AGA when using an external monitor of 1440p or above. Folks using 1080p displays can expect a deficit of 5-7% at most as compared to an equivalent desktop.
People with elite hardware (i9-9880h / AGA RTX 2080ti) should expect a worst case AGA bottleneck of 10% at 1080p, 7% at 1440p and 5% at 4k compared to an equivalent desktop.
Severe cases of AGA bottlenecking can be seen at the extremes, with as much as a 30% decrease in framerate seen in certain esports titles played at 1024x768 resolutions and low settings. These experiences are rare and limited to situations where very high end hardware (51m + 9900ks + AGA RTX Titan) is used with exceptionally low visual settings.

 


Unfortunately, there is not a simple answer to this question. I'll try to give the best explanation I can because it gets asked a lot and misinformation abounds. For context, my comments apply to the recent Alienware models post 15R3 / 17R4. Some of the earlier devices did whacky things with the routing of PCIe lanes and resource allocation that make them a little different. TL;DR at the end.

Cause of the AGA Bottleneck
The bottleneck imposed by the AGA is a direct function of how much information needs to be sent over it's 4x PCIe 3.0 bus. The interface technically suffers a very small amount of bandwidth loss compared to the 4x PCIe 3.0 bus of a typical desktop computer because of the way it is implemented in the AGA. The caldera connector, cable length and inferior EMI shielding inherent in it's design all contribute, but it's a very small amount we can ignore for the sake of explanation.

The amount of information transmitting the PCIe bus is primarily determined by how much work the CPU has to perform and subsequently transmit to the GPU and vice versa. In most cases--especially when gaming--the majority of PCIe traffic navigates the bus from CPU to GPU, so this is where we are most worried about bottlenecking. This assumption does not necessarily hold when the GPU is working on a task in a headless configuration (no display being drawn at all) or when an image is being rendered on one GPU and drawn on a display attached to another GPU. For simplicity, let's assume we are discussing the AGA bottleneck when gaming on a display directly attached to the GPU in the AGA. When using the AGA to render an image and output it on the internal laptop display, users will experience a wide range of performance results, influenced by both the model of laptop, the version of Windows being used and the OS settings applied. Microsoft has been very active on finding new ways to accommodate systems with multiple GPUs, rendering and outputting on different GPU devices, GPU scheduling, so the results in such a condition likely to change even over the next few months.

Examples of where the AGA bottleneck does not exist
If an AGA bottleneck is a function of how much information transmits the PCIe bus, let's use a few practical examples to understand where the PCIe bus limits do and do not exist. I am going to generalize a bit here for sake of example, but the result holds true on average. In almost all cases, a CPU will have more information to transmit across the PCIe bus to the GPU when frame rates increase, all else constant. Frame rates are not the only factor, but they are an easy one to understand. For most games, a CPU generates much more information to send to the GPU at 120 FPS compared to 60 FPS, assuming the frame rate is all that changes. Such is the case when a frame limiter is used and no other visual settings or the resolution is changed. High CPU loads alone are insufficient to create a bottleneck. For example, in Civilization 6, the AI turn times towards the end of the game can get very slow while the CPU works hard to calculate the thousands of actions the AI must perform. Despite working the CPU hard, these late game AI turns have largely no effect on any bottleneck imposed by the AGA because very little of this work performed by the CPU needs to be sent to the GPU.

Similarly, extremely high GPU loads alone do not exacerbate an AGA bottleneck. A very simple game requiring minimal CPU power but drawn at an absurd 16k resolution will not require a lot of information sent across the PCIe bus, but will require an enormous amount of horsepower from the GPU. Another extreme example of GPU bottlenecking with minimal stress on the PCIe bus exists in a lot of non-gaming GPU work such as data science and crypto currency mining. If you have a moment to look at a crypto currency mining motherboard, you'll find many have a dozen or more 1x PCIe 2.0 slots to populate with powerful GPUs. Despite each one of these slots having 1/8th the bandwidth of even our lowly AGA, the 1x PCIe 2.0 interface does not bottleneck even the most powerful GPUs when it comes to crypto mining. The PCIe bus in this case is only used to send raw data to fill the VRAM, which the GPU slowly processes. The rate the GPU works through the data stored in its VRAM is slower than even the abysmal bitrate of a 1x PCIe 2.0 interface. Again, no bottleneck exists.

Where the AGA bottleneck does exist
So if the above extreme cases are where an AGA bottleneck does not exist, where will an AGA bottleneck be the most pronounced? The answer is where neither the GPU nor CPU are bottlenecks themselves, yet the information transmitted from CPU to GPU remains high.One such situation exists when playing relatively modern titles at low resolutions, and with exceptionally high frame rates. Such is the case with esports titles, where a game like CS:GO is still regularly played at very low resolutions of 1024x768. Even these games played on these competitive graphics settings are likely to hit a CPU or GPU bottleneck before the PCIe bottleneck on some hardware. Such was the case of my first AGA compatible 15R4 (i7-8750h), where I used a GTX 1080 in the AGA. The worst PCIe bottleneck I ever experienced (and was able to reliably measure) with this system was in CS:GO at 1024x768 and low settings. The AGA bottleneck in this scenario was an average frame reduction of about 10%, as measured against the exact same GPU in a desktop with an 8700k tuned to the equivalent TDP and frequency limits of the Alienware's 8750h. The case of CS:GO at 1024x768 low was the worst result I was able to identify after trying about 8 different games, all at unrealistically low resolutions. Rocket League also exhibited very similar behavior to CS:GO at 1280x800. While this is a seemingly significant result, the resolution is ridiculously low. I never experienced a bottleneck with this 15R4/AGA GTX1080 combination when playing a game at 1080p resolution or higher, no matter the game or graphics settings configuration. The CPU or the GPU would hit a wall before the PCIe bus bottlenecked.

The 15R4 and GTX 1080 is showing its age, so what about a more modern Alienware configuration? At the other end of the spectrum is my current system: an Area 51m with a 9900ks (tuned to only 4.4Ghz for heat & noise) and a Titan RTX in the AGA. This is more power than almost anyone else will try to force through the AGA, but it presents a useful anecdote at the other extreme end of the spectrum. In the same conditions as the earlier CS:GO test on the 15R4, the Area 51m exhibits an average reduction of 28% in framerate compared to the exact same CPU (identical power and frequency limits) and GPU in a Z390 desktop build. While this is a seemingly huge deficit in comparison to that exhibited by the 15r4, remember this is at 1024x768, low settings and with the best CPU + GPU combination available in an Alienware laptop + AGA (at least until the A51m R2 hits shelves.) At a more realistic 1080p resolution, the deficit was only 11% on average compared to the desktop equivalent. The gap shrunk to 7% and 4% at 1440p and 4k respectively.

While I was never able to reliably measure a drop in performance for the 15R4+GTX1080 system compared to its desktop counterpart at 1080p or above, the same cannot be said for the 51m + Titan RTX system. In the 8 games I tried, my experience is that you should expect a decrease in performance of 5-10%, compared to a desktop equivalent using the AGA with this level of elite hardware at 1080p. To oversimplify this result a bit, as the average FPS of the game increases, so too will the prevalence of an AGA bottleneck. At 1440p, the performance difference shrinks to between 4-7% and at 4k it closes to 2-5%.

I have also tried similar testing the 51m with a 2070S and 2080 in the AGA. I concluded there is effectively no practical difference between using them in the AGA compared to a desktop. Perhaps 1-3% variation at most, but I'm certainly not sharp enough to recognize a difference that small and it could very well be within the margin of error for the hardware. The difference grew slightly at 1080p, but still always remained within 5% of the desktop.

Last thoughts
A few last things to mention. First, Nvidia is emphasizing GPU features such as tensor cores and raytracing that largely perform without help from a CPU. Turning on raytracing on a compatible AGA GPU results in the expected reduction to framerates we would expect from raytracing, but it causes the GPU to bottleneck earlier, reducing the performance impact of an AGA attributable bottleneck. Said another way, the cost of enabling raytracing on an AGA RTX GPU is comparatively less than enabling RTX in a desktop. If a desktop were to drop 20% framerate performance by enabling RTX, enabling it on an AGA RTX GPU might only drop your framerate by 15%. This isn't particularly novel, but if features like raytracing are important to you, the AGA won't hold you back.

Second, Microsoft is moving towards more work being done on a GPU without help from the CPU through scheduling and DX12. I expect this to further reduce the stress on the PCIe bus as functions get moved directly to the GPU and no longer need to traverse the PCIe bus. I plan to test some of the new Windows and Nvidia driver updates in the AGA next month to see what improvements have been made.

Third, many gamers with the money to spend on a fancy Alienware laptop, AGA and another GPU will also do their AGA gaming on an external monitor with a resolution higher than 1080. Very little bottlenecking exists under these conditions, even with extremely high end hardware. Only the most devout esports gamers who want to play at exceptionally low resolutions (<720p) and high FPS should absolutely shy away from the AGA in favor of a desktop because of AGA bottlenecking.

Lastly, a well-thought-out system and properly configured game settings will always provide the best experience. Please don't pair an 8750h with a 2080ti and expect to get the same FPS as people with the same GPU in a 9900k desktop. Your laptop chip is inferior to the desktop ones used for GPU reviews. Even the 9900ks in my 51m is grossly inferior to the same chip unrestricted under Noctua copper in a desktop. These are not the fault of the AGA, but rather the result of our world's current fascination with thin-and-light machines. You can always modify your graphics settings to find the right balance of visuals and frames that work to minimize any premature GPU, CPU or AGA PCIe bottleneck.

Overall, the AGA is still a fantastic performer in 2020 and outperforms any TB3 enclosure.


 

mit google übersetzer

Leider gibt es keine einfache Antwort auf diese Frage. Ich werde versuchen, die bestmögliche Erklärung zu geben, da sie häufig gefragt wird und es viele Fehlinformationen gibt. Für den Kontext gelten meine Kommentare für die neuesten Alienware-Modelle nach 15R3 / 17R4. Einige der früheren Geräte haben mit dem Routing von PCIe-Lanes und der Ressourcenzuweisung verrückte Dinge gemacht, die sie ein wenig anders machen. TL; DR am Ende.

Ursache des AGA-Engpasses
Der von der AGA auferlegte Engpass hängt direkt davon ab, wie viele Informationen über den 4x PCIe 3.0-Bus gesendet werden müssen. Die Schnittstelle weist im Vergleich zum 4x PCIe 3.0-Bus eines typischen Desktop-Computers aufgrund der Art und Weise, wie sie in der AGA implementiert ist, technisch einen sehr geringen Bandbreitenverlust auf. Der Caldera-Stecker, die Kabellänge und die minderwertige EMI-Abschirmung, die dem Design eigen sind, tragen alle dazu bei, aber es ist eine sehr kleine Menge, die wir zur Erklärung ignorieren können.

Die Menge an Informationen, die den PCIe-Bus übertragen, wird hauptsächlich davon bestimmt, wie viel Arbeit die CPU ausführen und anschließend an die GPU übertragen muss und umgekehrt. In den meisten Fällen - insbesondere beim Spielen - navigiert der Großteil des PCIe-Datenverkehrs über den Bus von der CPU zur GPU. Daher sind wir hier am meisten besorgt über Engpässe. Diese Annahme gilt nicht unbedingt, wenn die GPU an einer Aufgabe in einer kopflosen Konfiguration arbeitet (es wird überhaupt keine Anzeige gezeichnet) oder wenn ein Bild auf einer GPU gerendert und auf einer Anzeige gezeichnet wird, die an eine andere GPU angeschlossen ist. Nehmen wir zur Vereinfachung an, wir diskutieren den AGA-Engpass beim Spielen auf einem Display, das direkt an die GPU in der AGA angeschlossen ist. Wenn Sie die AGA verwenden, um ein Bild zu rendern und es auf dem internen Laptop-Display auszugeben, können Benutzer eine Vielzahl von Leistungsergebnissen erzielen, die sowohl vom Laptop-Modell als auch von der verwendeten Windows-Version und den angewendeten Betriebssystemeinstellungen beeinflusst werden. Microsoft war sehr aktiv bei der Suche nach neuen Wegen, um Systeme mit mehreren GPUs unterzubringen, das Rendern und Ausgeben auf verschiedenen GPU-Geräten sowie die GPU-Planung, sodass sich die Ergebnisse in einem solchen Zustand wahrscheinlich auch in den nächsten Monaten ändern werden.

Examples of where the AGA bottleneck does not exist
Since the AGA bottleneck depends on how much data crosses the PCIe bus, let's use a few practical examples to understand where the PCIe bus limits matter and where they don't. I'm generalizing a bit here, but the conclusions hold on average. In almost all cases, the CPU has more data to send over the PCIe bus to the GPU as frame rates rise, all else being equal. Frame rate is not the only factor, but it is the easiest one to understand. In most games, a CPU driving 120 FPS generates far more data to send to the GPU than at 60 FPS, assuming only the frame rate changes, which is the case when a frame limiter is used and no other visual settings or the resolution are touched. High CPU load alone is not enough to create a bottleneck. In Civilization 6, for example, AI turn times can become very slow toward the end of the game while the CPU works hard to compute the thousands of actions the AI needs to take. Despite the heavy CPU load, those late-game AI turns have almost no impact on an AGA-imposed bottleneck, because very little of the work the CPU does needs to be sent to the GPU.
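To make the frame-rate argument concrete, here is a purely illustrative sketch: the 20 MB-per-frame figure is an assumption for demonstration, not a measured value, and real per-frame upload sizes vary enormously by game.

```python
# Illustrative only: hypothetical per-frame CPU->GPU upload size,
# scaled by frame rate and compared against the AGA's link capacity.

AGA_LIMIT = 3.94  # approx. PCIe 3.0 x4 bandwidth in GB/s

def bus_traffic_gbps(mb_per_frame: float, fps: int) -> float:
    """CPU->GPU traffic in GB/s if each frame pushes mb_per_frame MB."""
    return mb_per_frame * fps / 1024

for fps in (60, 120, 240):
    t = bus_traffic_gbps(20.0, fps)  # assume 20 MB of draw data per frame
    verdict = "bottleneck" if t > AGA_LIMIT else "fits"
    print(f"{fps:3d} FPS -> {t:.2f} GB/s ({verdict})")
```

The point is simply that doubling the frame rate doubles the bus traffic, so the same game can fit comfortably at 60 FPS and saturate the link at very high frame rates.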

Similarly, extremely high GPU load alone does not aggravate an AGA bottleneck. A very simple game that needs minimal CPU power but is run at an absurd 16K resolution requires little data to be sent over the PCIe bus, yet an enormous amount of GPU horsepower. Another extreme example of a GPU bottleneck with minimal PCIe load shows up in many non-gaming GPU workloads such as data science and cryptocurrency mining. If you take a moment to look at a crypto-mining motherboard, you'll notice that many have a dozen or more 1x PCIe 2.0 slots to fill with powerful GPUs. Although each of those slots has 1/8 the bandwidth of even our modest AGA link, the 1x PCIe 2.0 interface is not a bottleneck even for the most powerful GPUs when it comes to crypto mining. The PCIe bus in that case is only used to send raw data to fill the VRAM, which the GPU then chews through slowly. The rate at which the GPU processes the data stored in its VRAM is slower than even the meager bit rate of a 1x PCIe 2.0 interface. Again, no bottleneck.
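The 1/8 figure can be verified directly from the standard PCIe spec values (PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b):

```python
# Compare a mining board's 1x PCIe 2.0 slot against the AGA's 4x PCIe 3.0 link.
mining_slot = 5.0 * (8 / 10) * 1 / 8     # 1 lane of PCIe 2.0, in GB/s
aga_link    = 8.0 * (128 / 130) * 4 / 8  # 4 lanes of PCIe 3.0, in GB/s

print(f"mining slot: {mining_slot:.2f} GB/s")   # 0.50 GB/s
print(f"AGA link:    {aga_link:.2f} GB/s")      # ~3.94 GB/s
print(f"ratio: 1/{aga_link / mining_slot:.0f}") # ~1/8, matching the text
```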

Where the AGA bottleneck does exist
If none of the extreme cases above produce an AGA bottleneck, where is the AGA bottleneck most pronounced? The answer is wherever neither the GPU nor the CPU is itself the bottleneck, yet the amount of data passed from CPU to GPU remains high. One such situation is playing relatively modern titles at low resolutions and exceptionally high frame rates. This is the case with esports titles, where a game like CS:GO is still regularly played at very low resolutions such as 1024x768. Even these games, played at those competitive settings

 


That sounds good, and it tells me I'll be getting an external WQHD monitor after all :)

I'm just not sure yet whether to go with a 3070, a 2080 Super, or a 2080 Ti. But I'll have to wait and see the prices, because I don't want to spend more than 450 euros.

Edited by hysterix


So much for "future-proof"... I've been following the discussions on NBRF for a while now, and the problems with the AGA and the new graphics cards seem to be piling up. -Link-

It's a bit embarrassing that the in-house solution doesn't work with the new cards, while random Thunderbolt eGPUs run fine even with an RTX 3090. Especially since AW was really the pioneer of the eGPU approach.

