The U.S. microchip export restrictions put in place last year, intended to halt China's development of the supercomputers used to build nuclear weapons and AI systems such as ChatGPT, are barely making a dent in the country's tech sector.

The rules banned shipments of the Nvidia Corp (NVDA.O) and Advanced Micro Devices Inc (AMD.O) chips that have become the global benchmark for building chatbots and other artificial intelligence (AI) systems.

Nvidia, however, has created slower variants of its chips for the Chinese market in order to comply with the U.S. regulations. Industry insiders said the Nvidia H800, introduced in March, is expected to be 10% to 30% slower at some AI tasks and may cost twice as much as Nvidia's fastest U.S. chips.

Even the slower Nvidia chips represent an improvement for Chinese firms. Tencent Holdings (0700.HK), one of China's largest internet companies, estimated in April that systems using Nvidia's H800 would cut the time needed to train its largest AI system by more than half, from 11 days to four days.

The back-and-forth between the two sides shows how difficult it is for the United States to slow China's technological progress without hurting U.S. companies. Part of the U.S. aim in setting the limits was to avoid a shock that would prompt Chinese firms to abandon American chips altogether and accelerate their own chip-development efforts.

One chip-industry executive, who asked not to be named in order to discuss private conversations with regulators, said regulators had to draw the line somewhere, and wherever they drew it they would face the challenge of gradually degrading China's capabilities without being immediately disruptive.

The export restrictions have two parts. The first caps a chip's ability to compute extremely precise numbers, a measure meant to restrict the use of supercomputers for military research.
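The distinction behind that first cap can be sketched in a few lines. The example below is illustrative only: high-precision scientific simulation relies on 64-bit arithmetic, while AI training commonly tolerates much coarser 16-bit formats, which is one reason a precision cap bites harder on military research than on chatbot work. A minimal Python sketch of the difference:

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (16 bits)."""
    return struct.unpack('e', struct.pack('e', x))[0]

tiny = 1e-10  # a perturbation of the kind precise simulation must track

# Python floats are 64-bit doubles: the tiny increment is resolved.
print(1.0 + tiny > 1.0)           # True

# In 16-bit half precision, the same increment vanishes entirely.
print(to_half(1.0 + tiny) > 1.0)  # False
```

The `'e'` format code in the standard-library `struct` module packs a float into IEEE half precision, which makes the rounding loss easy to demonstrate without third-party libraries.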
Sources in the chip sector said that measure was effective. But computing extremely precise numbers matters less in AI work such as large language models, where the volume of data a chip can process is more important.

Nvidia has begun marketing the H800 to China's biggest technology companies, including Tencent, Baidu Inc (9888.HK) and Alibaba Group Holding Ltd (9988.HK), for use in such work, although it has not yet started shipping the chips in large volumes.

Nvidia said in a statement last week that the government does not want to hurt competition or U.S. industry, and allows American firms to supply products for commercial activities such as offering cloud services to customers. China is an important market for U.S. technology, it added.

Separately this week, Nvidia's chief scientist, Bill Dally, predicted that the gap will widen quickly over time, because training requirements tend to double every six to 12 months.

The U.S. Commerce Department's Bureau of Industry and Security, which is in charge of enforcing the regulations, did not respond to a request for comment.

The second U.S. restriction, on chip-to-chip transfer speeds, also affects AI. The models behind innovations like ChatGPT are too big to fit on a single chip; instead, they must be spread across many chips, often thousands at a time, all of which must communicate with one another.

Nvidia has not released performance figures for the H800, which is available only in China, but an obtained specification sheet shows a chip-to-chip speed of 400 gigabytes per second, just under half the 900 gigabytes per second peak speed of Nvidia's flagship H100, which is sold outside China.

Some in the AI industry say that speed is still sufficient.
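A back-of-the-envelope cost model suggests why a near-halving of interconnect bandwidth need not halve training speed: only the communication portion of each training step is stretched by the cap. The 20% communication share below is an assumed figure for illustration, not vendor data.

```python
# Back-of-the-envelope sketch (assumed cost model, not measured data):
# per-step training time = compute time + communication time, where the
# communication term scales inversely with chip-to-chip bandwidth.

def step_slowdown(comm_fraction: float, bandwidth_ratio: float) -> float:
    """Overall slowdown factor when interconnect bandwidth falls to
    `bandwidth_ratio` of the original, given that `comm_fraction` of a
    step was previously spent on chip-to-chip communication."""
    compute = 1.0 - comm_fraction
    comm = comm_fraction / bandwidth_ratio
    return compute + comm

# The H800's 400 GB/s is a bit under half the H100's 900 GB/s. If an
# assumed 20% of step time goes to interconnect traffic, the whole step
# slows by only ~25%, in the same ballpark as the 10-30% range cited.
print(step_slowdown(0.20, 400 / 900))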
Naveen Rao, chief executive of MosaicML, a startup that specialises in helping AI models run well on limited hardware, estimated a 10% to 30% system slowdown.

Money helps make up the difference. A chip in China that takes twice as long as a faster U.S. chip to finish an AI training task can still get the work done.

In addition, AI researchers are working to shrink the enormous systems they have built, to lower the cost of training products like ChatGPT and similar processes. These will require fewer chips, reducing chip-to-chip communication and lessening the impact of the U.S. speed limits.
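The payoff from shrinking models can be estimated with a standard ring all-reduce cost model, a generic textbook estimate rather than figures from any of the companies involved: the interconnect traffic each chip generates when synchronizing gradients scales with model size, so a slimmer model directly reduces the traffic that a chip-to-chip speed cap throttles.

```python
# Generic ring all-reduce traffic estimate (a textbook cost model, not
# vendor data): synchronizing gradients for a model occupying `param_bytes`
# across `n_chips` moves about 2 * param_bytes * (n - 1) / n bytes per chip.

def allreduce_bytes_per_chip(param_bytes: float, n_chips: int) -> float:
    """Approximate per-chip interconnect traffic for one gradient sync."""
    return 2.0 * param_bytes * (n_chips - 1) / n_chips

# Halving the model halves the traffic regardless of chip count, which is
# why slimmer models blunt the impact of a chip-to-chip speed cap.
full = allreduce_bytes_per_chip(1e9, 1000)
half = allreduce_bytes_per_chip(5e8, 1000)
print(half / full)  # 0.5
```

The `(n - 1) / n` factor means per-chip traffic is nearly flat in the number of chips, so the dominant lever on communication volume is the model size itself.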