Can optical transceivers transform data transmission by 2030?

This interview is an excerpt from our report titled “Optical transceivers: The new gold standard in data communications by 2030.”

As 5G accelerates data transmission, optical transceivers are evolving to meet the demands for higher speed and efficiency. However, as data flows increase, high-speed performance must be balanced against the looming challenge of energy consumption in data centers.

PreScouter spoke with Simon Reissmann, a senior principal applications engineer at FormFactor, where he works on probing and test stations for the semiconductor industry. Simon discussed the pivotal role of optical transceivers, their current capabilities, and the challenges and opportunities that lie ahead in the next 3-5 years.

This interview covers the following topics:

  • The role of optical transceivers in data transmission
  • Designing transceivers for ultra-fast speeds
  • Trade-offs in data transmission
  • The importance of form factors
  • Innovation strategies and promising optical transceiver development
  • Companies innovating in the optical transceiver space
  • Potential roadblocks for novel transceiver development
  • Return on investment in ultra-fast optical transceivers

The role of optical transceivers in data transmission:

Q: How do you see the role of optical transceivers in data transmission over the next 3-5 years?

A: I think the key word here is 5G, which has already been deployed. And what people need to realize is that 5G is known as a wireless communication technology. It’s basically a radio: you have a 5G receiver in your cell phone and you get faster data rates. But what people need to understand is that there needs to be a backbone to this.

So, the radio towers everywhere that make my cell phone work at 5G speeds are connected to data centers, usually with optical fiber communications. With the advent of 5G and higher data rates, the data center space and optical transceivers will also grow massively. So, that’s a big growth opportunity, we see it happening, and that’s why the market is taking off.

And the other big challenge, or development, here is that there are forecasts for the energy consumption of the data centers that process all this data. It is predicted that by 2030, data centers will consume 20% of the world’s energy, which is huge.

So, it’s imperative that the big tech companies and the data center companies start thinking about how to save energy. There are wild ideas, like putting data centers under the sea to cool them, basically using the sea itself as the cooling medium.

And one big component is that optical transceivers, or high-speed transmission of data over optics, are more energy-efficient than electrical data transmission over lines or cables. So, I can transmit higher data rates over longer distances, without having to repeat them, at a lower wattage per bit, essentially. And that’s why optical transceivers are one major area of research and interest for all the data center companies and the big tech companies, I believe.

Ultra-fast speeds, trade-offs, and form factors:

Q: In your perspective, what needs to be done to design transceivers to achieve speeds beyond 100 Gbps?

A: Nothing anymore. They already do. I was at the Optical Fiber Communication Conference two weeks ago, and the top-rate transceivers that you can currently buy on the market are transmitting at 800G. So, they are already at those speeds. We are already thinking about terabit speeds.

So, again, I think speed is not the main component. It’s really the efficiency of these transceivers, meaning the watt-per-bit ratio is what’s important. So, how much throughput can I have? I could maybe make a 1-terabit transceiver, but if it consumes twice as much energy as an 800-gigabit transceiver, then that is not really efficient, or even cost-efficient, for the data centers.

So, I would advise not only looking at the data rate itself, which is certainly interesting and important, but also at how many watts each of the thousands or tens of thousands of transceivers in a data center really consumes.

It’s really like looking at the fuel gauge of a car. Rather than asking how fast I can make the car: I could build a Formula 1 car, but it’s not terribly efficient. And that’s where the economics come into play.
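As a back-of-the-envelope illustration of that watt-per-bit framing, here is a minimal Python sketch. The module powers are hypothetical numbers chosen only to mirror Simon’s point, not vendor specifications:

```python
def picojoules_per_bit(power_watts: float, rate_gbps: float) -> float:
    """Energy spent per transmitted bit, in picojoules (pJ/bit)."""
    bits_per_second = rate_gbps * 1e9
    return power_watts / bits_per_second * 1e12

# Hypothetical module powers, for illustration only.
print(picojoules_per_bit(16.0, 800))   # 800G module at 16 W -> 20.0 pJ/bit
print(picojoules_per_bit(32.0, 1000))  # 1T module at 32 W   -> 32.0 pJ/bit
```

By this metric, a 1-terabit module that doubles the power draw of an 800G module is a step backward, exactly the Formula 1 trade-off described above.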

Q: How important is the speed to distance trade-off in transmitting data?

A: That question goes in a similar direction, meaning if I want to transmit at faster speeds through a fiber, I have a penalty: I can’t transmit that fast over as long a distance. For optical transceivers in the data center space, in these really huge data centers, I think the longest distance I have heard of that is relevant is 10 km. That’s about 6.2 miles, from one end of the data center to the other. These buildings are huge. They’re really like cities.

And I think that for these distances, as long as you’re not going ultra-long distances of 100 km or more, the trade-off is not as big as one might think, meaning the difference between going 1 km and 10 km is marginal. I think it becomes more relevant if you go over 25 miles or so.

Then there will be a trade-off, and you want to start thinking: okay, do I need to throttle down my speed so that I can get further? But for the data center space and the optical transceivers that we are talking about, it’s not as relevant, I think.
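To put rough numbers on that speed-to-distance intuition, here is a small sketch using typical textbook attenuation figures for standard single-mode fiber (assumed values, not measurements):

```python
# Typical single-mode fiber attenuation; assumed textbook values.
ATTENUATION_DB_PER_KM = {"1310 nm (O-band)": 0.35, "1550 nm (C-band)": 0.20}

for band, alpha in ATTENUATION_DB_PER_KM.items():
    for km in (1, 10, 100):
        print(f"{band}: {km:>3} km -> {alpha * km:6.2f} dB fiber loss")
```

Going from 1 km to 10 km costs only a few dB, well within a typical link budget, while 100 km costs tens of dB, which is where amplification or slower, more robust modulation starts to matter.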

Q: How important are form factors and protocols in the context of data transfer speed or distance?

A: Very important. That goes back to energy consumption and the whole space question.

So, if you think of an optical transceiver traditionally, there are two parts to it. There’s an electrical processing unit that spits out the bits at a certain data rate, and those feed into a second component. So, there’s an electrical transmission into a laser, essentially, which then puts these bits out by powering on and off, like Morse code, more or less.

And one way to throttle down the energy consumption and the form factors of these transceivers is to bring these parts closer together, so that we don’t have electrical transmission into the optical part anymore, or only over a very limited range.

And that’s where integrated photonics comes into play, where we’re not making two chips anymore, one with the electrical processing unit and the other with the laser, but trying to put them on one chip to minimize both the space and the energy consumption. So, making these pieces smaller, the chip smaller, and the form factor smaller is a major part of reducing the energy consumption.

Recently, a colleague of mine went through the development of form factors, showing that the pluggables that you put into the server have shrunk while at the same time reducing their energy consumption and increasing their speed.

Innovations and promising optical transceiver development:

Q: How will companies innovate to keep up with the demands of increasing data communications in the future?

A: That’s a crystal ball question. I think they’re looking at silicon photonics, or integrated photonics, integrating the photonics into the chips rather than making them separately, so really miniaturizing everything. And then there are different technologies for how you can transmit data.

For example, you don’t have to use only one wavelength to shoot through one fiber; you can use a technology called WDM, wavelength-division multiplexing. This means that instead of one laser that shoots out the light, you have four different ones, each at a slightly different wavelength with only a 5-nm spacing: you might have 1305, 1310, 1315, and 1320 nm.

You shoot that into one fiber, and then on the other end, on the receiver end, they can split up these wavelengths again, meaning they divide the light up and can basically distinguish between 1305 and 1310.

And then you have four different data streams through a single fiber. That’s one way you can, without laying new fiber, basically change the transmitter and the receiver to push four times as much data through it. That is just one example out of many, many.
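As a toy illustration of that multiplexing idea (a sketch, not any real transceiver’s implementation), wavelengths can be modeled as independent channels that share one fiber and are separated again at the receiver:

```python
# Toy WDM: four wavelengths, four independent bit streams, one fiber.
def mux(streams: dict[int, str]) -> list[tuple[int, str]]:
    """Combine per-wavelength streams onto one shared 'fiber'."""
    return list(streams.items())

def demux(fiber: list[tuple[int, str]], nm: int) -> str:
    """Recover the stream carried on one wavelength at the receiver."""
    return next(bits for wavelength, bits in fiber if wavelength == nm)

streams = {1305: "1011", 1310: "0010", 1315: "1110", 1320: "0001"}
fiber = mux(streams)
print(demux(fiber, 1310))  # -> "0010": four streams over a single fiber
```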

There’s also coherent optics, which companies like Ciena are doing, and which has traditionally been used for longer-distance transmission, really hundreds of kilometers or miles. With coherent optics, you shoot light through a fiber at the same wavelength, but with different polarizations.

So, that’s basically about how the electromagnetic wave, I don’t know if you’re familiar with that, but electromagnetic waves have an orientation, and you can send different polarization orientations through these fibers. And this can increase the data throughput through a fiber.

This coherent technology, like I said, was traditionally for longer distances, but it is moving down, and I know that some companies are thinking about using the same technology for shorter distances in the data center space. It’s more expensive, but it might be beneficial in terms of throughput. Those are just two examples of what people work on, but that space really is huge.

Q: Which technologies do you think are most promising?

A: I think from a cost perspective, silicon photonics, meaning integrated photonics. That’s the most promising way to reduce power consumption. 

I think that coherent optics and wavelength-division multiplexing are also viable, wavelength-division multiplexing less so. But coherent optics comes with expensive optical parts in the transmitter and receiver that cost more than the traditional technologies. So, they have the potential to increase capacity, but they might not be economically feasible, and that might make the transceivers way more expensive.

Another thing we haven’t really talked about, maybe more as an add-on to the previous question, is that for the past five or six years there have been different ideas on how you can modulate the signal itself. Traditionally, these are called NRZ signals, which really are just power on or power off. Power on is 1, and power off is 0. So, you basically just have a signal that goes up and down, and now there is research going on into PAM4.

NRZ is really PAM2, pulse amplitude modulation with two levels. And now there’s a technology called PAM4, pulse amplitude modulation with four levels. This means you don’t only distinguish between on and off: the highest power level might be 1 W, and the lowest, down at the noise floor, 200 mW.

Maybe 0.3 W and 0.6 W are the levels in between. So, you’re basically trying to cram more bits in at the same baud rate. But that makes these transceivers more sensitive to errors. So, there’s a trade-off. You can try it, but then the transceivers have to be more meticulously designed to distinguish between noise and actual signal.
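The throughput arithmetic behind that is simple: PAM4 carries two bits per symbol instead of one, so the bit rate doubles at the same baud rate. A minimal sketch (the 50-GBd figure is just an illustrative number):

```python
import math

def bit_rate_gbps(baud_gbaud: float, levels: int) -> float:
    """Bit rate = symbol rate x bits per symbol (log2 of the level count)."""
    return baud_gbaud * math.log2(levels)

print(bit_rate_gbps(50, 2))  # NRZ/PAM2 at 50 GBd -> 50.0 Gb/s
print(bit_rate_gbps(50, 4))  # PAM4     at 50 GBd -> 100.0 Gb/s
```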

Q: What about advanced digital signal processing, DSP?

A: Yeah, that goes into what I mentioned with PAM4. That’s really just post-processing after the transceiver receives the signal.

If you have PAM4, pulse amplitude modulation with four levels, you do more extensive DSP on those signals because it is harder to distinguish between noise and the actual signal, or rather, harder to distinguish the signal levels. Traditionally, you have only a 0 and a 1, a high and a low. That’s easier. Now, you have two levels in between.

Yes, that’s a good point: you can push more data through a fiber by using PAM4, but it probably does come at the cost of more DSP, more post-processing after you have received the signal, and that might also increase the cost a little bit.

So, it’s not going away. As these modulation schemes evolve, there are even thoughts about PAM8 for really short distances like 10 or 15 meters, so even more levels. And that DSP will become more important.
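A receiver-side “slicer” is the simplest way to picture what that DSP has to do: map each noisy amplitude sample back to the nearest nominal level. The sketch below reuses the illustrative power levels from above and a common Gray-code bit mapping (an assumption for illustration, not any specific product’s scheme):

```python
# Toy PAM4 slicer: decide which of four levels a noisy sample represents.
LEVELS_W = [0.2, 0.3, 0.6, 1.0]       # nominal levels from the example above
GRAY_BITS = ["00", "01", "11", "10"]  # assumed Gray mapping, symbol -> bits

def slice_sample(power_w: float) -> str:
    """Return the two bits of the nominal level nearest to the sample."""
    idx = min(range(len(LEVELS_W)), key=lambda i: abs(power_w - LEVELS_W[i]))
    return GRAY_BITS[idx]

print(slice_sample(0.27))  # nearest to 0.3 W -> "01"
print(slice_sample(0.95))  # nearest to 1.0 W -> "10"
```

Notice that 0.2 W and 0.3 W sit much closer together than the upper levels, which is exactly why PAM4 is more sensitive to noise than a simple on/off NRZ signal.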

Q: What are your thoughts on other promising technologies, such as optical combs?

A: Combs. I probably have to Google that term, but I assume what we are talking about is wavelength-division multiplexing.

So, I guess that’s where the name comes from; it makes sense that you would call it that way. If you send, like I said, multiple wavelengths through the same fiber, then before you put them in the fiber, you have to combine them, basically fuse them into one fiber.

And then on the other side, you have to split them back up again. I guess you could call that combing, basically splitting one signal up into four or more.

So, these combs would enable more throughput through these fibers with multiple wavelengths, four or eight, as far as I’m aware. But it’s an optical component that has to be physically designed pretty accurately, it adds cost to the transceivers, and it’s pretty sensitive.

And there are questions in terms of robustness: if you dropped such a transceiver on the floor or something like that, you could disturb this opto-mechanical system, and then it might malfunction.

So, it’s an interesting technology that enables more throughput, but it might not be as robust, and it certainly adds cost compared to PAM4, which is the lowest-cost option because you don’t really have to change much in the mechanical design of the system.

And another point that speaks against WDM, or these combs, is that they would add to the form factor of the system. I don’t know how much size that adds to an optical pluggable transceiver.

Q: What are your thoughts on external cavity lasers?

A: Okay. So, traditionally, those are called VCSELs, vertical-cavity surface-emitting lasers, and then there is also something called VECSELs, vertical-external-cavity surface-emitting lasers. I actually worked on those as an undergrad. They have an external mirror. How do I describe this best?

So, you usually have a laser with an active region and a DBR, a mirror, on each side. One has 99.9999999% reflectivity and the other 95%. Then you have a standing wave in the laser, and in that active region, the photons that pass through get multiplied. So, you’re basically amplifying the light.

If you have an external-cavity surface-emitting laser, the external cavity is adjustable, meaning you could change the wavelength slightly and change the power output, so you have a bit more adjustment capability. I don’t know for sure, but I could imagine they’re used for WDM systems.

It’s an interesting technology. But one mirror is fixed on the chip with the active region, and the other is an external mirror, so I would worry about the form factor, meaning how large these are, and how mechanically stable the whole thing is.

But it would probably enable adjustment of optical parameters over temperature, because you can adjust that mirror and play with it a little, and you could also adjust the wavelength. So, that’s what it enables. But there are also downsides in terms of form factor.

Companies innovating in the optical transceiver space:

Q: Which companies in your opinion are innovating in this space?

A: Many that we work with. They’re big players: Lumentum, Luxtera. I think there was a company that was acquired by Analog Devices and then got bought by another company whose name I can’t recall right now.

I know that even the big companies like Facebook and Google have their own hardware units now that make optical transceivers. So, you would think: why are they doing this? Don’t they just write code? Why do they care about hardware?

But they are really dependent on these data centers, and one way of reducing cost is making themselves a little bit more independent of individual suppliers.

So, I know there are ideas that Google, for example, is working on essentially open-source optical transceivers. They would design them, leave them unpatented, and make them available to everyone. Similar thinking as on the software side with Linux and so on.

So, I would also keep the big tech companies like Google, Facebook, and even Apple in mind, companies you don’t traditionally think of as optical transceiver suppliers. They’re working hard in this space too.

Q: How are these companies innovating in this space?

A: Besides what I mentioned before: experimenting with silicon photonics to miniaturize the transceivers, trying to make things open source, and changing baud rates or modulation schemes in the transceivers to push more bits through the same fiber.

Potential roadblocks for novel transceiver development:

Q: What do you think are the potential technology roadblocks for developing novel transceivers to achieve ultra-fast speeds and distances?

A: That’s a good question. I think at some point it’s probably the electrical processing units that feed into the optical lasers. So, how fast can they switch up and down? And the other thing that you have to understand, and why we talk about optical transceivers at all, I think I mentioned this earlier, is that it’s really hard to make electrical systems on copper that send really high frequencies or data rates over large distances.

If I send a 100-gigabit signal through a copper trace on a PCB, I can send it for about 5 to 8 inches before the signal degrades so much that you will have trouble distinguishing what you’re looking at.

With optics, you can send it for a kilometer or a mile and not have that same problem. There’s just way less loss, by a factor of a thousand, in an optical fiber than on a PCB trace.

So far, that isn’t really a problem at data rates of 800 gigabits per second or so. But if you go even higher, you will run into this problem: you can’t even feed the optical laser with a good enough signal, with good enough integrity essentially, for it to distinguish the 1’s and 0’s. And that’s why everyone works on miniaturization.

That’s a big roadblock, and really why everyone works on silicon photonics: put the electrical part and the optical part as close together as possible, even on the same chip. There’s just this physical limitation that if I want to increase the data rates more and more, at some point I can’t send the signal that far anymore. So, it needs to be really close together.
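A rough reach comparison makes that physical limitation tangible. The copper figure below is an assumed ballpark for a high-speed PCB trace, and the fiber figure a typical single-mode value; both are illustrative, not measured:

```python
BUDGET_DB = 10.0            # example allowable end-to-end loss
COPPER_DB_PER_INCH = 1.5    # assumed ballpark at tens of GHz on a PCB trace
FIBER_DB_PER_KM = 0.35      # typical single-mode fiber near 1310 nm

print(f"copper reach: ~{BUDGET_DB / COPPER_DB_PER_INCH:.1f} inches")
print(f"fiber reach:  ~{BUDGET_DB / FIBER_DB_PER_KM:.0f} km")
```

Under the same loss budget, the copper trace gets a handful of inches while the fiber gets tens of kilometers, the factor-of-a-thousand difference Simon describes.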

And the other limitation is probably more of a market limitation: right now, there’s a lot of innovation. It’s a bit of a Wild West, with a lot of good ideas and R&D going on. But at some point, all this new innovation has to be standardized.

If I buy an optical transceiver, it needs to work on a Cisco server and on a competitor’s HP server. So, at some point, if you want to make it scalable, you have to standardize what’s happening. And right now, there are a lot of great ideas in different technologies, but it’s a little bit Wild West.

In order for this to really take off and work, at some point five or six big players have to say, “Okay, this is how we do it.” And I don’t think we’re there yet.

Return on investment in ultra-fast optical transceivers:

Q: In your view, would a player investing in research and development of ultra-fast optical transceivers be able to see a return on that investment in the near future, say 5-7 years?

A: Yes, definitely. If you look at the growth rates, going from 4G to 5G you’re talking about a thousand times higher data volume that gets transmitted from your iPhone to a data center. So, there’s huge potential even in the near term. I would look even up to 2030; this is not going to go away. There’s a big payoff, and that’s why everyone is working on it.

Q: How in your perspective can companies differentiate themselves to stay ahead of their competitors in the silicon photonics market?

A: By paying more attention to the energy efficiency of their systems, I think. That also ties back to the roadblocks a little bit. Really, with that expected increase in data volume, it’s not clear whether we can keep up on the energy efficiency of the transceivers, because at some point, Google or whoever runs these big data centers has to pay the energy bill. And even politically, they will need to cater to the renewable energy policies that are certainly coming in the next 10-15 years.

So, I think they need to find a good balance between high-speed data rates and the cost in terms of watts per bit. That’s how you win. Just focusing on who can do it fastest is not enough. You need to be the most efficient.

The bottom line:

The interview with Simon Reissmann highlights the ongoing evolution of optical transceiver technologies in response to escalating data demands. With the advent of 5G and the relentless pursuit of ultra-fast speeds, optical transceivers emerge as linchpins in the infrastructure of digital connectivity. Yet, as we march forward, it is not merely speed that dictates success, but rather the delicate balance between speed, efficiency, and sustainability. 

Companies investing in research and development, particularly in silicon photonics, are primed to reap substantial returns in the near future. However, differentiation in this competitive market will hinge on a commitment to push the boundaries of speed while also prioritizing energy efficiency and environmental responsibility.

Disclaimer: Comments and opinions expressed by interviewees are their own and do not represent or reflect the opinions, policies, or positions of PreScouter or have its endorsement.

If you have any questions or would like to know if we can help your business with its innovation challenges, please contact us here or email us at solutions@prescouter.com
