eGPU

DESKTOP-CLASS PERFORMANCE
The Razer Core X series features support for the latest PCIe desktop graphics cards including NVIDIA® GeForce® and AMD XConnect™ enabled Radeon™ cards. Highly mobile developers can now harness the power of compatible NVIDIA® Quadro® cards and AMD Radeon™ Pro cards for professional graphics performance.

JUST PLUG AND PLAY
Once initial setup is completed with your laptop, the Razer Core X eGPUs are plug and play with compatible laptops, so you can connect and get into your game session quickly. The single Thunderbolt cable connection unlocks desktop graphics. Step up to the Razer Core X Chroma™ to effortlessly connect to desktop peripherals, Gigabit Ethernet, and Razer Chroma™ RGB lighting.

INCREDIBLY FAST SPEEDS
The Thunderbolt™ 3 (USB-C) connection between a laptop and the Razer Core X series yields incredibly fast speeds, while offering a standard connection to various systems. The unique dual-chip design of the Razer Core X Chroma effectively handles both graphics and peripheral data through a single Thunderbolt 3 cable to the laptop.

VERSATILE COMPATIBILITY
The Razer Core X external graphics enclosure is incredibly versatile and compatible with Thunderbolt™ 3 systems running Windows 10 RS5 or later and Macs running macOS High Sierra 10.13.4 or later. Laptops require a Thunderbolt 3 port with external graphics (eGFX) support.

FUTURE PROOF
Stay at the top of your game by keeping your performance maxed. Razer Core X series lets you easily upgrade your graphics card, so you can instantly give your laptop a boost.

STAY COOL ALWAYS
Perfectly sized to fit your setup, the Razer Core X series features additional cooling and open vents in its aluminum body for optimized thermal performance.

Turing GPU Architecture
Based on a state-of-the-art 12 nm FFN (FinFET NVIDIA) high-performance manufacturing process customized for NVIDIA and incorporating 896 CUDA cores, the NVIDIA T1000 GPU is the most powerful single-slot professional solution for CAD, DCC, financial services industry (FSI), and visualization professionals looking for excellent performance in a compact and efficient form factor. The Turing GPU architecture enables the biggest leap in real-time computer graphics rendering since NVIDIA's invention of programmable shaders in 2001.

Advanced Shading Technologies
The Turing GPU architecture features the following new advanced shader technologies.
Mesh Shading: Compute-based geometry pipeline to speed geometry processing and culling on geometrically complex models and scenes. Mesh shading provides up to 2x performance improvement on geometry-bound workloads.
Variable Rate Shading (VRS): Gain rendering efficiency by varying the shading rate based on scene content, direction of gaze, and motion. Variable rate shading provides similar image quality with 50% reduction in shaded pixels.
Texture Space Shading: Object/texture space shading to improve the performance of pixel shader-heavy workloads such as depth-of-field and motion blur. Texture space shading provides greater throughput with increased fidelity by reusing pre-shaded texels for pixel-shader heavy VR workloads.

Advanced Streaming Multiprocessor (SM) Architecture
Combined shared memory and L1 cache improve performance significantly, while simplifying programming and reducing the tuning required to attain best application performance. Each SM contains 96 KB of L1/shared memory, which can be configured differently depending on the compute or graphics workload. For compute workloads, up to 64 KB can be allocated to the L1 cache or shared memory, while graphics workloads can allocate up to 48 KB for shared memory, 32 KB for L1 cache, and 16 KB for texture units. Combining the L1 data cache with the shared memory reduces latency and provides higher bandwidth.

Higher Speed GDDR6 Memory
Built with Turing's vastly optimized GDDR6 memory subsystem for the industry's fastest graphics, the NVIDIA T1000 features 4 GB of frame buffer capacity and 160 GB/s of peak bandwidth, double the throughput of the previous generation. The NVIDIA T1000 is the ideal platform for 3D professionals with demanding workflows involving vast datasets and multi-display environments.
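The 160 GB/s figure follows from the usual peak-bandwidth arithmetic: bus width times per-pin data rate. A quick sanity-check sketch, assuming a 128-bit memory interface running GDDR6 at 10 Gbps per pin (values consistent with the quoted bandwidth, not stated in this document):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s:
    (pins) * (bits per second per pin) / (8 bits per byte)."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed T1000-class configuration: 128-bit bus, 10 Gbps GDDR6.
print(peak_bandwidth_gbs(128, 10.0))  # 160.0 GB/s
```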

Single Instruction, Multiple Thread (SIMT)
New independent thread scheduling capability enables finer-grain synchronization and cooperation between parallel threads by sharing resources among small jobs.

Mixed-Precision Computing
Double the throughput and reduce storage requirements with 16-bit floating point precision computing to enable the training and deployment of larger neural networks. With independent parallel integer and floating-point data paths, the Turing SM is also much more efficient on workloads with a mix of computation and addressing calculations.
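The storage saving from 16-bit floating point is easy to see directly: FP16 uses 2 bytes per element versus 4 for FP32. A minimal NumPy sketch with a hypothetical one-million-parameter weight array (the size is illustrative, not a figure from this document):

```python
import numpy as np

n = 1_000_000  # hypothetical weight count
w32 = np.zeros(n, dtype=np.float32)   # 4 bytes per element
w16 = w32.astype(np.float16)          # 2 bytes per element

print(w32.nbytes)  # 4000000 bytes
print(w16.nbytes)  # 2000000 bytes: half the storage for the same element count
```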

Graphics Preemption
Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.

Compute Preemption
Preemption at the instruction-level provides finer grain control over compute tasks to prevent long-running applications from either monopolizing system resources or timing out.

H.264 and HEVC Encode/Decode Engines
Deliver faster than real-time performance for transcoding, video editing, and other encoding applications with two dedicated H.264 and HEVC encode engines and a dedicated decode engine that are independent of the 3D/compute pipeline.

NVIDIA GPU BOOST 4.0
Automatically maximize application performance without exceeding the power and thermal envelope of the card. GPU Boost 4.0 allows applications to stay in the boost clock state longer under a higher temperature threshold before dropping to a secondary, temperature-based base clock.

NVIDIA Ampere Architecture
The NVIDIA RTX A4000 is the most powerful single-slot GPU solution, offering high-performance real-time ray tracing, AI-accelerated compute, and professional graphics rendering. Building upon the major SM enhancements of the Turing GPU, the NVIDIA Ampere architecture enhances ray tracing operations, tensor matrix operations, and concurrent execution of FP32 and INT32 operations.

2nd Generation RT Cores
Incorporating 2nd generation ray tracing engines, NVIDIA Ampere architecture-based GPUs provide incredible ray traced rendering performance. A single RTX A4000 board can render complex professional models with physically accurate shadows, reflections, and refractions to empower users with instant insight. Working in concert with applications leveraging APIs such as NVIDIA OptiX, Microsoft DXR and Vulkan ray tracing, systems based on the RTX A4000 will power truly interactive design workflows to provide immediate feedback for unprecedented levels of productivity. The RTX A4000 is up to 2X faster in ray tracing compared to the previous generation. This technology also speeds up the rendering of ray-traced motion blur for faster results with greater visual accuracy.

CUDA Cores
The NVIDIA Ampere architecture-based CUDA cores bring up to 2.7X the single-precision floating point (FP32) throughput compared to the previous generation, providing significant performance improvements for graphics workflows such as 3D model development and compute for workloads such as desktop simulation for computer-aided engineering (CAE). The RTX A4000 enables two FP32 primary data paths, doubling the peak FP32 operations.

3rd Generation Tensor Cores
Purpose-built for the deep learning matrix arithmetic at the heart of neural network training and inferencing, the RTX A4000 includes enhanced Tensor Cores that accelerate more data types and add a new Fine-Grained Structured Sparsity feature that delivers up to 2X throughput for tensor matrix operations compared to the previous generation. The new Tensor Cores also accelerate two new precision modes, TF32 and BFloat16. Independent floating-point and integer data paths allow more efficient execution of workloads using a mix of computation and addressing calculations.
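Fine-Grained Structured Sparsity is commonly described as a 2:4 pattern: at most two of every four consecutive weights are nonzero, which lets the sparse Tensor Cores skip the zeros for up to 2X throughput. A minimal pruning sketch, assuming magnitude-based selection of which two weights per group to keep (the selection criterion is an illustrative assumption):

```python
import numpy as np

def prune_2_to_4(w: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude weights in every group of four,
    producing the 2:4 pattern that sparse Tensor Cores can exploit."""
    out = w.reshape(-1, 4).copy()
    # Indices of the two smallest |w| within each group of four.
    drop = np.argsort(np.abs(out), axis=1)[:, :2]
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out.reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.03, 0.6], dtype=np.float32)
p = prune_2_to_4(w)
print(p)  # exactly two nonzero entries remain in each group of four
```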

PCIe Gen 4
The RTX A4000 supports PCI Express Gen 4, which provides double the bandwidth of PCIe Gen 3, improving data-transfer speeds from CPU memory for data-intensive tasks like AI and data science.
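The doubling over Gen 3 comes straight from the link rate: both generations use 128b/130b encoding, with Gen 3 signaling at 8 GT/s per lane and Gen 4 at 16 GT/s. A quick sanity check for an x16 link:

```python
def pcie_lane_bandwidth_gbs(gt_per_s: float) -> float:
    """Per-lane, per-direction bandwidth in GB/s for PCIe Gen 3/4,
    both of which use 128b/130b encoding."""
    return gt_per_s * (128 / 130) / 8

gen3_x16 = 16 * pcie_lane_bandwidth_gbs(8.0)   # ~15.75 GB/s per direction
gen4_x16 = 16 * pcie_lane_bandwidth_gbs(16.0)  # ~31.51 GB/s per direction
print(round(gen4_x16 / gen3_x16, 2))           # 2.0: Gen 4 doubles Gen 3
```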

Higher Speed GDDR6 Memory
Built with 16 GB of GDDR6 memory, the RTX A4000 delivers up to 23% greater throughput for ray tracing, rendering, and AI workloads than the previous generation. The RTX A4000 provides the industry's largest graphics memory footprint to address the largest datasets and models in latency-sensitive professional applications.

5th Generation NVDEC Engine
NVDEC is well suited for transcoding and video playback applications for real-time decoding. The following video codecs are supported for hardware-accelerated decoding: MPEG-2, VC-1, H.264 (AVCHD), H.265 (HEVC), VP8, VP9, and AV1.

7th Generation NVENC Engine
NVENC can take on the most demanding 4K or 8K video encoding tasks to free up the graphics engine and the CPU for other operations. The RTX A4000 provides better encoding quality than software-based x264 encoders.

Error Correcting Code (ECC) on Graphics Memory
Meet strict data integrity requirements for mission critical applications with uncompromised computing accuracy and reliability for workstations.

Graphics Preemption
Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.

Compute Preemption
Preemption at the instruction-level provides finer grain control over compute tasks to prevent long-running applications from either monopolizing system resources or timing out.

NVIDIA RTX IO
RTX IO accelerates GPU-based lossless decompression performance by up to 100x and lowers CPU utilization by up to 20x compared to traditional storage APIs, using Microsoft's new DirectStorage for Windows API. RTX IO moves data from storage to the GPU in a more efficient, compressed form, improving I/O performance.

As part of the NVIDIA RTX™ family of professional GPUs, the NVIDIA T600 provides the performance, features, reliability, and support that enterprise customers have come to expect from NVIDIA professional visual computing solutions. As businesses look to lower the total cost of their computing solutions, the NVIDIA T600 is a powerful, cost-effective solution that helps them stay within budget, without sacrificing productivity. Step up to the power of an NVIDIA discrete professional GPU.

Turing GPU Architecture
Based on a state-of-the-art 12 nm FFN (FinFET NVIDIA) high-performance manufacturing process customized for NVIDIA and incorporating 896 CUDA cores, the NVIDIA T600 GPU is the most powerful single-slot professional solution for CAD, DCC, financial services industry (FSI), and visualization professionals looking for excellent performance in a compact and efficient form factor. The Turing GPU architecture enables the biggest leap in real-time computer graphics rendering since NVIDIA's invention of programmable shaders in 2001.

Advanced Streaming Multiprocessor (SM) Architecture
Combined shared memory and L1 cache improve performance significantly, while simplifying programming and reducing the tuning required to attain best application performance. Each SM contains 96 KB of L1/shared memory, which can be configured differently depending on the compute or graphics workload. For compute workloads, up to 64 KB can be allocated to the L1 cache or shared memory, while graphics workloads can allocate up to 48 KB for shared memory, 32 KB for L1 cache, and 16 KB for texture units. Combining the L1 data cache with the shared memory reduces latency and provides higher bandwidth.

Single Instruction, Multiple Thread (SIMT)
New independent thread scheduling capability enables finer-grain synchronization and cooperation between parallel threads by sharing resources among small jobs.

Graphics Preemption
Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.

H.264 and HEVC Encode/Decode Engines
Deliver faster than real-time performance for transcoding, video editing, and other encoding applications with two dedicated H.264 and HEVC encode engines and a dedicated decode engine that are independent of the 3D/compute pipeline.

Advanced Shading Technologies
The Turing GPU architecture features the following new advanced shader technologies.
Mesh Shading: Compute-based geometry pipeline to speed geometry processing and culling on geometrically complex models and scenes. Mesh shading provides up to 2x performance improvement on geometry-bound workloads.
Variable Rate Shading (VRS): Gain rendering efficiency by varying the shading rate based on scene content, direction of gaze, and motion. Variable rate shading provides similar image quality with 50% reduction in shaded pixels.
Texture Space Shading: Object/texture space shading to improve the performance of pixel shader-heavy workloads such as depth-of-field and motion blur. Texture space shading provides greater throughput with increased fidelity by reusing pre-shaded texels for pixel-shader heavy VR workloads.

Higher Speed GDDR6 Memory
Built with Turing's vastly optimized GDDR6 memory subsystem for the industry's fastest graphics, the NVIDIA T600 features 4 GB of frame buffer capacity and 160 GB/s of peak bandwidth, double the throughput of the previous generation. The NVIDIA T600 is the ideal platform for 3D professionals with demanding workflows involving vast datasets and multi-display environments.

Mixed-Precision Computing
Double the throughput and reduce storage requirements with 16-bit floating point precision computing to enable the training and deployment of larger neural networks. With independent parallel integer and floating-point data paths, the Turing SM is also much more efficient on workloads with a mix of computation and addressing calculations.

Compute Preemption
Preemption at the instruction-level provides finer grain control over compute tasks to prevent long-running applications from either monopolizing system resources or timing out.

NVIDIA GPU BOOST 4.0
Automatically maximize application performance without exceeding the power and thermal envelope of the card. GPU Boost 4.0 allows applications to stay in the boost clock state longer under a higher temperature threshold before dropping to a secondary, temperature-based base clock.
