I think the combination, especially in AI systems where accuracy doesn't need to be perfect and can be modeled in analog, could make a real comeback.
i.e., if some digital circuitry is interfacing with an analog counterpart where 16-bit precision is enough on the digital side, and the analog circuit can produce outputs matching that accuracy much more efficiently than digital while still being externally adjustable by the digital side, why not?
I think it could open some new doors in AI, though what excites me more is what else it could be applied to besides AI. Circuit simulations? Signal generation? Perhaps some analog "programming languages"?
Analog has a limited dynamic range, in addition to its well-known problems with noise, sensitivity to random variation within devices or in the surrounding environment, etc. Most AI systems use floating-point compute because of its much greater dynamic range compared to fixed point, even though the latter is far more efficient.
For ML inference, the dynamic range and rounding noise of the 16-bit or 8-bit floating-point formats now preferred for this application are well within analog capabilities.
Analog processing cannot compete with 32-bit floating-point numbers when their precision and/or dynamic range is required.
The power requirements for analog processing depend a lot on the required signal-to-noise ratio (SNR) and dynamic range, just as they do for digital processing.
Increasing the SNR for digital processing requires processing more bits, which means either replicating digital blocks such as adders, or raising the clock frequencies so that the existing digital blocks can process more bits serially in the same time. Both methods increase power consumption.
Increasing the SNR for analog processing means using more complex circuits with more devices, and using larger devices, e.g. larger transistors, capacitors and resistors, so that the effect of the various noise sources becomes relatively smaller (due to averaging). This also increases power consumption.
The relative power consumption of analog and digital processing depends a lot on the technologies used, but in general analog processing needs much less power when the requirements for SNR/dynamic range are low. The power increases rapidly with the requirements, though, and past some threshold it becomes larger than for implementing the same task with digital processing.
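A toy numeric sketch of that crossover (the constants and scaling laws are illustrative assumptions, not measurements: analog power is taken to grow linearly with the SNR as a power ratio, since thermal noise must be averaged down, while digital power grows roughly linearly with the bit count):

```python
# Toy model of the analog-vs-digital power crossover described above.
def analog_power(snr_db, k_analog=1e-9):
    snr_linear = 10 ** (snr_db / 10)   # SNR as a power ratio
    return k_analog * snr_linear       # watts (toy units)

def digital_power(snr_db, k_digital=1e-5):
    bits = snr_db / 6.02               # ~6 dB of SNR per bit
    return k_digital * bits            # watts (toy units)

for snr_db in range(20, 101, 10):
    a, d = analog_power(snr_db), digital_power(snr_db)
    winner = "analog" if a < d else "digital"
    print(f"{snr_db:3d} dB: analog={a:.2e} W  digital={d:.2e} W  -> {winner}")
```

With these made-up constants, analog wins by orders of magnitude at low SNR and digital takes over somewhere past 40-50 dB; where the real crossover sits depends entirely on the technology, as described above.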
In modern CMOS technologies designed for making digital devices, the analog components have become worse and worse compared with older technologies. That has pushed the threshold where digital processing becomes preferable lower and lower.
For example, because of this, at every smartphone generation where a new CMOS process is used, an important part of migrating the previous designs from the old process to the new one is identifying old analog blocks whose functions could be implemented digitally in the new process, so that those analog blocks can be replaced by digital ones. Otherwise the new chip could be worse than the old one due to worse analog parts, even if the digital parts have improved.
The parent article makes the point that this trend, which has been continuous for the last few decades, can be reversed now due to advances in packaging and interconnection technologies, which can make a modern multi-chip device similar in price to a monolithic device.
If the restriction of using a single chip is removed, then it is no longer necessary to use the bad analog components provided by modern CMOS technologies; it becomes possible to use a separate analog chip, made with an appropriate manufacturing process.
In this case the threshold where digital processing becomes preferable to analog processing can be moved back up to the higher SNR/dynamic-range values where it was many years ago.
I've seen some very limited discussion of new modes of analog processing. For example, you can generatively produce an FPGA bitstream with a genetic algorithm that does strange things, e.g. totally disconnected logic components evolving to influence each other's operation via RF leakage within the die. There are no doubt other methods. Constructions which model neurons are also a possibility, as are other things.
A lot of the analog in those devices is very high speed and must be put on, say, a sapphire substrate, or involves some other complicated method requiring wafer-level integration or a separate package. The energy budget may be worse for digital at our current tech level, but other effects like package size and package/wafer/process cost dominate.
Spintronics, phononics, photonics, ... could also lead to new computing and reduced sizes.
I often feel that when digital components are numerous enough, you fall back into statistical/analog thinking anyway (even if it's just a case of a few thousand VMs respawning on demand). No more discrete, perfect information... just flocks of potentials.
A 16-bit signal already represents an almost 100 dB signal-to-noise ratio, which is extremely difficult to do operations on. I think 8 bits would be more realistic, but it would still suffer from the problems mentioned by the other reply.
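For reference, the quantization SNR of an ideal N-bit converter with a full-scale sine input follows the standard formula SNR = 6.02·N + 1.76 dB:

```python
# Ideal quantization SNR per bit depth (textbook formula).
for bits in (8, 12, 16, 24):
    print(f"{bits:2d} bits -> {6.02 * bits + 1.76:5.1f} dB")
# 16 bits gives ~98 dB (the "almost 100 dB" above);
# 8 bits gives ~50 dB, far easier to maintain through analog stages.
```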
My educated guess would be no - excepting specialist applications like super high frequency signal processing, or very high power levels.
If we look at trends: as computers got faster, more and more analog signal-processing circuitry got replaced by digital equivalents, with an analog front end that does downmixing and initial filtering before the ADC.
For example, analog class A and AB transistor amplifiers were replaced by class D amps: essentially a pulse-width-modulated signal controlled by a switch-mode FET, with a filter bank on the back end. This allowed the creation of very compact and power-efficient audio equipment, like the portable Bluetooth speakers we have today.
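A minimal sketch of the class-D principle, with illustrative values (a 1 kHz tone, a 200 kHz carrier, and a one-pole RC standing in for the real output filter bank):

```python
import numpy as np

fs = 1_000_000                                   # 1 MHz simulation rate
t = np.arange(0, 0.005, 1 / fs)                  # 5 ms
audio = 0.8 * np.sin(2 * np.pi * 1000 * t)       # 1 kHz tone

# Compare audio against a 200 kHz triangle carrier -> PWM square wave,
# i.e. the signal driving the switch-mode FET.
carrier = 2 * np.abs((t * 200_000) % 1 - 0.5) * 2 - 1
pwm = np.where(audio > carrier, 1.0, -1.0)

# One-pole RC low-pass at 20 kHz recovers the audio from the PWM.
alpha = 1 - np.exp(-2 * np.pi * 20_000 / fs)
out, acc = np.empty_like(pwm), 0.0
for i, x in enumerate(pwm):
    acc += alpha * (x - acc)
    out[i] = acc

print("RMS error of recovered audio:", np.sqrt(np.mean((out - audio) ** 2)))
```

The FET only ever sits fully on or fully off, which is where the efficiency comes from; the filter throws away the switching frequency and keeps the average, which is the audio.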
Likewise, all the other traditionally analog stuff, like power supplies, signal processing and motor control, got a major boost from improving digital technology, to the point that I can't really imagine analog making a comeback except for some niche reason.
Low power is another niche where analog can be best. Did you know that even current smartphone designs include electromechanical (really) filters for radio signals? SAW filters convert electrical oscillations to mechanical ones and back.
AFAIK hearing aids also rely on analog stages in some pretty clever ways to save energy.
By "warmer" sound, what is meant is that even though tube amplifiers normally produce greater distortion than many more modern amplifiers, for some reason this kind of distortion sounds more pleasant than other kinds.
If amplifiers were ideal, producing no distortion, they could not be distinguished in any way, regardless of what devices were used to implement them. Modern class-D amplifiers can behave almost ideally when amplifying low-frequency signals, but for signals with frequencies close to the upper limit of the audio range their behavior becomes much less ideal.
While I have pleasant memories of a tube amplifier that I used many years ago, I have never made comparative tests with other amplifiers, so I could not say whether it was better or worse regarding "listening fatigue".
Even when comparative tests are done in an identical setup, by swapping a tube amplifier for another amplifier, it is extremely difficult to disentangle which audio differences are caused by using tubes as the amplifying devices, which by the audio transformers included in most tube amplifiers, and which by the interaction between the amplifier outputs and the loudspeakers. That interaction can differ greatly between kinds of amplifiers, due to their different relationships between output impedance and signal frequency.
And to cut a little more audiophile fuzziness out, I'd say I'd associate "warmer" with a high cut and boosted upper mids. Which sounds like it would alleviate fatigue to some degree, and perhaps be perceived as louder at a given SPL.
Tangentially, I built a little transmission-line speaker (see the Woden Designs Baby Labs PDF; I built the Redeye, IIRC, the one that used the cheapest driver). The little 3-inch driver really rolls off above 10k, so I'm missing a whole octave of the highest frequencies. Granted, I've spent some time around planers and table saws and electric guitars and don't have the best ears, but side by side with other speakers you notice. The point is you notice, and yet it really doesn't sound bad to lose a little high end. Something is missing, but it's not ugly to listen to in the way some other speaker defects are.
I hate to even bring more audiophile-speak into the thread, but I have heard that the distortion harmonics are "even" in tube amplifier architectures and "odd" in solid-state. (I don't even know what that means, though, or in any event why that would be.)
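For what it's worth, the even/odd distinction is straightforward to see numerically: a transfer curve that is symmetric about zero (f(-x) = -f(x)) can only produce odd harmonics, while an asymmetric curve adds even ones. A sketch with two made-up transfer curves:

```python
import numpy as np

n, f = 4096, 64                         # samples, cycles per window
x = np.sin(2 * np.pi * f * np.arange(n) / n)

symmetric = np.clip(1.2 * x, -1, 1)     # crude hard clip: f(-x) = -f(x)
asymmetric = x - 0.3 * x ** 2           # crude asymmetric curve

for name, y in (("symmetric", symmetric), ("asymmetric", asymmetric)):
    spec = np.abs(np.fft.rfft(y)) / n
    present = [h for h in range(2, 6) if spec[h * f] > 1e-3]
    print(f"{name:10s} -> harmonics present: {present}")
# symmetric  -> harmonics present: [3, 5]
# asymmetric -> harmonics present: [2]
```

The usual story is that single-ended tube stages have gently asymmetric curves (hence even harmonics), while push-pull solid-state stages are symmetric by construction (hence odd); whether that explains the "warmth" is exactly the audiophile argument above.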
Analog still has not left power supplies. The dynamics are just too fast for low-power digital. So the solution for a long time has been mixed analog and digital.
At higher power, there's usually a hierarchical control scheme with an all-digital supervisor at the top. Analog isn't required at that level because the dynamics are slower, and digital has the extra benefit of more control options that analog can't do.
As the dynamics get faster, there's an even greater need for analog. You start needing increasingly expensive op amps. But op amps trade speed for convenience. So the fix is even more analog, but the integrated type. At that point, you might as well throw in more digital features.
Which is why I think we'll just see even more mixed-signal options at the lowest levels. Different ways of leveraging analog speed and low power under an increasingly sophisticated digital manager.
If you look at any modern voltage regulator, it switches in the megahertz range with variable pulse width and interval. This is all controlled in the digital domain (lower-end ones do use analog control loops, but higher-end ones do not).
Above the megahertz-level switcher, everything else is just capacitors providing low impedance at frequencies into the high hundreds of megahertz (and at gigahertz it’s on-die capacitance).
‘Too fast for digital’ makes no sense when talking about control loops; digital is faster.
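A minimal sketch of such a digital control loop, assuming a crude buck-converter-style plant and made-up PI gains:

```python
# Discrete PI controller: runs once per switching period, sets the next
# PWM duty cycle. The plant model and all constants are illustrative.
V_IN, V_REF = 12.0, 5.0
KP, KI = 0.05, 0.02                        # made-up controller gains
duty, integ, v_out = 0.0, 0.0, 0.0

for cycle in range(200):                   # one iteration per switching period
    v_out += 0.2 * (duty * V_IN - v_out)   # crude plant: relax toward duty*V_IN
    err = V_REF - v_out
    integ += err
    duty = min(max(KP * err + KI * integ, 0.0), 1.0)  # clamp duty cycle

print(f"settled near {v_out:.3f} V with duty {duty:.3f}")  # ~5 V, duty ~5/12
```

The point of doing this digitally is everything around the loop: soft start, current limits, fault handling and telemetry are all just more code.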
Not really what the original article was talking about, but sure :)
I'm reminded of Behringer building entire factories on the back of a line of reissues of other people's analog synths. They worked out how to build that stuff in modern workflows without compromising the analog signal paths: I've taken apart their newer gear and seen how it was done. All you have to do is not build it like it's a cellphone, give the audio paths quite a lot of extra signal-handling capacity, and apply some fairly modern circuit-board design knowledge.
I've got a couple of MakeNoise synths that are likewise very modern designs but stand up to anything anyone's ever done, vintage or not. Designing analog circuitry is definitely in a renaissance.
Yes, the new analogue music stuff being produced now is really amazing. For instance, the tracking (ability to produce the correct pitch) on some of the newer eurorack oscillators is amazing across many octaves.
Also, everything analogue feels so snappy compared to doing stuff in "the box". On a computer, the audio must be buffered whenever it is transferred between components, and this really adds up to noticeable latency when doing real-time stuff.
Of course, there is stuff which is not feasible to do in analogue. Spring reverbs are cool, but they are far from digital reverbs like the Valhalla Shimmer.
(Speaking of telephones, I really miss the times when phones did not have all this latency. It feels like speaking on a walkie-talkie sometimes.)
> Speaking of telephones, I really miss the times when phones did not have all this latency. It feels like speaking on a walkie-talkie sometimes.
Amen to that. The number of times that I've gotten into 'we both start talking at the same time' deadlocks on the phone (and video conferencing, for that matter) in the last couple of years has reached high levels of irritation.
Get a USB audio interface, increase sample rate, and decrease buffer size in the device settings. This has completely eliminated VoIP latency on my system as long as I’m calling people that use Ethernet instead of Wi-Fi.
On Linux you’ll also have to mess with pulseaudio settings, but that’s all you need on Windows.
In theory a motherboard mic input could offer the same settings, but they (correctly) assume most users have a shitty CPU that can’t handle real time tasks without lag spikes that lead to buffer underruns.
Unless you only work with other software developers (or gamers, I guess) who are latency-sensitive like us, most of the people you speak with over VoIP are unlikely to have an Ethernet connection.
Last time I worked full-time in an office, some people didn't even know you could connect to the internet/router via a cable instead of wireless...
If you're talking with them over VoIP, my hunch is that money isn't the problem here (30m ethernet cables cost like ~$15-30), but rather being able to pull a cable through the house/apartment without making it look like an eyesore or even being in the way.
I don't think I'll be able to get my customers and target companies to adapt, I just have to use whatever they bring to the table. I guarantee that the latency isn't on my end (128 byte buffer).
PipeWire can supply even lower latencies than PulseAudio (approaching jackd or perhaps matching it if mic and speakers run at the same clock and don't need resampling), but the only way to fully take advantage of PipeWire offering "zero latency routing and no mixing latency beyond hardware buffers" is to rewrite apps from the PulseAudio API to JACK or native PipeWire, which most browsers (and presumably VoIP software) haven't done.
I run JACK, but if not that I'd go for PipeWire for sure; Pulse is only good for quick-and-dirty stuff. There is a PulseAudio sink for JACK that allows for some interop, but of course you will end up with the latencies stacked.
>On a computer, the audio must be buffered whenever transferred between components
Not true. Any half-decent audio API will drive the whole audio graph from the hardware I/O callback, so there’s only a single buffer, usually of length 256-512 samples.
How does that work? Sure, the buffers are in the hardware, but there still needs to be buffering between each pair of components? Say between two different effect plugins.
Let’s say you have an audio graph like this in your DAW:
synth → reverb → output
Every 5.33ms the OS audio system (CoreAudio, JACK, etc.) calls your DAW to produce 256 samples of audio. The DAW looks at the audio graph and asks the plugin connected to the output, the reverb, to produce 256 samples of audio. The reverb calls the next plugin, the synth, to produce 256 samples of audio, then applies the reverb effect on it, then returns it to the DAW, which returns it to the OS, which writes it to the audio hardware’s buffer. This all happens within the same 5.33ms callback.
You can have as many plugins as you like, there’s no additional latency (unless the plugin explicitly adds buffering).
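A sketch of that pull model; the class names and the toy reverb are invented for illustration:

```python
import math

BLOCK, SAMPLE_RATE = 256, 48000

class Synth:
    def __init__(self):
        self.phase = 0
    def process(self, n):
        out = [math.sin(2 * math.pi * 440 * (self.phase + i) / SAMPLE_RATE)
               for i in range(n)]
        self.phase += n
        return out

class Reverb:
    def __init__(self, source):
        self.source = source
        self.tail = [0.0] * 2400            # 50 ms feedback delay line
    def process(self, n):
        dry = self.source.process(n)        # pull upstream, same callback
        out = []
        for s in dry:
            wet = s + 0.4 * self.tail[0]
            self.tail = self.tail[1:] + [wet]
            out.append(wet)
        return out

graph = Reverb(Synth())

def io_callback():                  # the OS calls this every ~5.33 ms
    return graph.process(BLOCK)

print(len(io_callback()), "samples produced within a single callback")
```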
Ah, I see. So, within a given process (such as a DAW) you can bundle stuff together, as long as the combination of everything is fast enough to make the deadline. But different processes communicating over, say, JACK, still have buffering between them.
Sadly, most audio APIs are not "half-decent" at zero-latency inter-app routing and minimal mixing/output latency (for a particularly bad example, see https://superpowered.com/androidaudiopathlatency). JACK on Linux supported this but few apps supported it, ALSA and PulseAudio loopback devices don't have it to my knowledge, and PipeWire is the first chance at bringing mainstream non-musician apps to this promised land (but only JACK and PipeWire apps, and Pulse and ALSA apps still have added latency).
Additionally, conventional hardware input/output requires at least two periods of buffering for a stable duplex session (mic input and speaker playback simultaneously). On startup, you fill the output buffer with 2 periods of silence. Every period (when input buffer grows and output buffer shrinks by 1 period), you take 1 period of input audio, feed it through the audio graph (which can take up to 1 period otherwise your speakers run out of audio to play and stutter), then write 1 period of output audio to the speakers. Smaller periods (lower sample/frame count, lower duration in ms) reduce latency, but increase the risk of stutter if your system can't schedule and run all audio apps in 1 period of time.
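A tiny simulation of that start-up sequence, showing where the two periods of latency come from:

```python
from collections import deque

out_queue = deque([None, None])     # start-up: 2 periods of silence queued

for period in range(5):
    captured = f"input from period {period}"
    processed = captured.replace("input", "output")  # the audio graph
    out_queue.append(processed)                      # write 1 period out
    print(f"period {period}: speakers play -> {out_queue.popleft()}")
# Output derived from period 0's input starts playing in period 2:
# two periods of end-to-end latency.
```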
Yeah, Android and realtime audio shouldn't occur in the same sentence, but that's due to their business model ¯\_(ツ)_/¯
I was explicitly mentioning JACK and CoreAudio, because the discussion was about professional audio tools.
>Additionally, conventional hardware input/output requires at least two periods of buffering for a stable duplex session
This is not true if both the input and output are coming from the same device. In that case you can serve both input and output within the same IO callback, thus the playthrough latency will be only a single period. Multi-device playthrough is more complicated, as you'd need asynchronous sample-rate conversion due to different hardware clocks (i.e. clock drift), which inherently increases latency.
> In that case you can serve both input and output within the same IO callback, thus the playthrough latency will be only a single period.
By playthrough latency, do you mean having the output play the same audio being captured from the input/mic? One period of loopback/playthrough latency is only possible if you can instantaneously copy data from the input to output buffers (for example by using the same hardware buffer for both?). In practice, PipeWire and JACK2 both introduce 2 periods (in PipeWire, quantums) of buffering latency, which I've discussed in my blog (https://nyanpasu64.gitlab.io/blog/low-latency-audio-output-d...), or you can confirm by reading their source code.
Hmmmm… actually you’re right! I was thinking that there’s going to be a single I/O interval between the end of the input buffer and the start of the output buffer, but now I realize that this means 2 x I/O interval latency between the input and output streams.
When people talk about audio buffers they’re talking about something that’s filled and drained from RAM when an interrupt fires every 1ms, not the data that’s processed every few nanoseconds as the CPU moves on from running one filter/plugin to the next. The latter doesn’t even leave the CPU cache.
This used to be the only truth, and is definitely still a fact for purists and vintage collectors. But especially for working musicians and non-collecting producers the tide is turning: they want to have their cake and eat it too. Which means new synths that can be both analog and modern.
There is also a lot of cost cutting involved. They nailed some models, others not so much, but even those are a good approximation of what the original units were.
If anything buying Behringer gear got me thirsty for the real deal.
+1.
There's no way one can build a fully digital audio system; the signal is analog when it is produced and returns to analog before reaching our ears.
Also, I don't see analog/digital as mutually exclusive; they're two worlds that very often work together, each benefiting from the technology of its time. Digital chips displaced analog ones in calculators very quickly because we didn't, and still don't, have the technology to build ideal or very-close-to-ideal op amps and other parts. Should we reach that point one day, it would be a game changer: as an example, the number of transistors needed to calculate a square root with a couple of op amps would be two orders of magnitude smaller than the simplest processor, not to mention no digital quantization, no clock needed, virtually no delay and far less power consumption.
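A toy numeric illustration of that kind of analog square-root computation: an integrator driven by the error between the input and the square of its own output settles at the square root. A real circuit would put a multiplier in an op amp's feedback loop; the gain and step size here are arbitrary:

```python
def analog_sqrt(x, gain=5.0, dt=1e-3, steps=20_000):
    v = 1.0                           # integrator state (initial guess)
    for _ in range(steps):
        v += gain * (x - v * v) * dt  # dv/dt = k * (x - v^2)
        v = max(v, 1e-9)              # single-quadrant, like the circuit
    return v

for x in (2.0, 9.0, 100.0):
    print(f"sqrt({x}) ~ {analog_sqrt(x):.4f}")
```

The continuous circuit does this "instantly", limited only by its settling time; the loop here is just simulating that settling.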
I'm not saying that we could build an analog computer running Doom; in all honesty I can't even imagine how it could do that without involving something digital. But maybe swapping more elements of a digital system for analog ones might be doable one day, to get the best of both worlds.
Remember why ECL lost the battle for the fastest semiconductor process family, though. ECL was several times faster than CMOS for many years, but CMOS's inherent scalability meant that it was profitable to throw hundreds of billions of dollars at improving it. So there was only a narrow window in time when ECL designers like Cray had the upper hand.
Once ECL lost the high ground in performance, it was gone forever. You won't encounter it outside of computer museums today, except in very specialized applications. Analog computing is like that, only even more doomed. It doesn't scale.
I do think we'll get back to analog eventually, for the same reason our brains aren't digital. But that will look like alien technology compared to what we have now, so there's no near-term incentive to explore it. Right idea, wrong timeframe, probably by a hundred years or more.
Agreed. If anything, it was in the 90s and early 2000s that almost everyone moved to digital.
I remember several keyboardist, guitarist and engineer friends selling all their analog gear for stuff that is now completely obsolete and abandoned by players.
The analog stuff on the other hand sells like hotcakes. Even stuff from the 90s fetches crazy prices.
And with the renaissance, people are also buying newly designed analog stuff at an amazing pace.
No, producers don't just recompress and resample between every track of the mixer; that would be foolish both quality- and performance-wise. The source material is decoded only once, if decoding is even needed, and inside the production software it's always raw sample data.
I am speaking generally, it is difficult to be in full control of the signal chain as it is often up to the software and operating systems. Even as someone who just plays music, getting a "bit perfect" feed for my DAC is highly non-trivial.
A DAC is a sampler that sends digital audio to your computer. ASIO bypasses the operating system audio driver. The signal chain is that simple. I'm not sure what you mean.
I know the difference, I'm an engineer. I assumed you were talking about recording, since quite obviously you can get a bit-perfect signal from the digital side; it's maybe not so obvious going the other way. There is a persistent myth that digital audio is inferior in some way. The signal chain is not doing any processing. Quantization noise is not audible and can be removed using oversampling. Analog recording, on the other hand, introduces flutter, distortion, and noise that cannot be easily removed.
Parent poster clearly means ADC. The rest of the post is absolutely true.
> No, the software signal chain is not simple.
While the software itself might be complex, the grandparent is right: digital audio data coming out of the ADC doesn’t get transformed in the way you suggested above. There’s no secret compression, transcoding or resampling going on without the knowledge of engineers, as it would add noise and interfere with production techniques. Even things like dithering have to be added separately.
Again, I was speaking generally about digital audio, the average consumer has no idea what happens when using audio playback software, USB DACs, AirPlay, Chromecast etc. Even a few of the downloads I have purchased have been resampled (more fraud than secret in that case).
Alright. That's totally fair and true, but that's not within the context of this thread, the first post specifically said "music production wise" and this is what all the corrections refer to.
Consumer audio, even (or especially!) when things are 10x more expensive than pro products, is definitely a bit of a garbage fire.
Gonna agree with the other replies. That’s definitely not the case for music production. We even use special drivers on some OSs, bypassing the slow paths, and we’re very particular about gain structuring, signal path and everything around our ADCs/DACs, so there’s not much mystery for us there. We’re in control and it’s as perfect as it gets.
Isn't the main hindrance the cost and speed of development? You can buy an off-the-shelf digital FPGA/CPU chip/board and get one of a thousand software engineers to start building a prototype tomorrow.
An analogue solution is much costlier and more time-consuming.
The area where I've seen analog computers make sense is CCTV cameras. Installing an analog board in a camera to identify shapes is cheaper than sending all the video elsewhere for processing.
Think manufacturing QA, retail foot traffic, or similar applications.
Even if you get that part right you still have to pick a number of discrete steps that isn’t annoying but offers a good enough granularity. Good luck making everybody happy with that choice.
Yeah, fidelity will be just right with a knob, barring incompetence. Not to mention how much easier it is to implement at the hardware and software level: just read the value from the knob.
Even with a receiver whose volume knob is denominated in decibels, there are 30 useful steps between really quiet and really loud for content that uses the full volume of the track. Content that is quiet, or quiet in places, can easily need another 15 steps.
And sometimes, sitting in the room, the attenuation set to -34 dB versus -32 dB feels really different.
So that says you need 22+ steps to be sufficiently expressive, but everyone wants a 1-10 scale.
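The arithmetic (the span and step sizes just follow the numbers above):

```python
# ~45 dB of useful range, and adjacent positions should differ by about
# 2 dB (audible but not jarring). A 1-10 knob forces ~5 dB jumps.
SPAN_DB = 45.0
for positions in (10, 22, 45):
    print(f"{positions:2d} positions -> {SPAN_DB / (positions - 1):.1f} dB per step")
# 10 positions -> 5.0 dB per step
# 22 positions -> 2.1 dB per step (about the -34 vs -32 dB difference)
# 45 positions -> 1.0 dB per step
```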
I would expect Bose to get at least that right, but you never know these days. The Bose headsets in my house work flawlessly. I can imagine a monitor company getting it wrong.
On my Windows PC, with the SB ZXR sound card and my ATH headphones, I want my level no higher than 8-10%; the lack of steps is a major pain and must be compensated for by in-app volume controls.
On my Samsung Android, at least I have their Sound Assistant, which makes my Galaxy S21U a lot more useful. There I can greatly increase the number of steps, and the useful range is actually 0-100%. (Also, I can play e.g. Audible and Spotify at the same time, with individual volume controls.)
But it will never come close to the fidelity of the volume knob on my 25-year-old Rotel preamp. May this system never die, or at least outlive me.
It's quite expensive to learn this trade and it takes a looot of time. If you think programming is hard, designing analogue circuits is on another level.
Yet corporations pay EEs substantially less (at least in my country), so not many people are interested in getting into it.
For 40 years I've been thinking of this question as technical, in terms of signals and noise, error correction, theoretical limits of precision and so on.
Now I think those are the wrong questions.
Digital versus analogue, if we approach it as a "conflict", is political, connected to human rights and values around experience, ownership, reproducibility and uniqueness, time, space and reality, and the ephemeral versus the eternal.
What would the world be like if we could re-run history and develop analogue technologies on a capability par with our digital ones? Is it unimaginable that picometer-scale manufacture, DNA bio-computing and quantum technologies might evolve out of a totally different trajectory than Boolean logic gates, von Neumann/Harvard architectures and packet networks?
In such a world, would we still have conceits like Digital Rights Management or massive centralisation into single points of power and potential failure?
Would we feel as alienated by/from our technology and need an expert guardian class to manage the masses? Or would powerful analogue computers somehow feel more natural to our analogue brains? More understandable and maintainable by the average person? Would they be mass-producible or remain powerful tools for the few?
Would we still rush around like headless chickens, worrying about nanoseconds of efficiency and trying to connect everything to everything else just because we can?
Perhaps we would have entirely different problems, such as virus-like nanobots that reprogram people, or computers that forget important things. Maybe we would have already achieved "artificial intelligence".
A big downside to this on-package combination is the inevitable degradation and destruction of some I/O. Take optics, for example: if your SFP transceiver has a bad source or detector, you can replace it and keep going. If your on-package optics chiplet dies, you're simply stuck with no optics and have to replace the entire package, or maybe even the entire device.
I know nothing about analog computing. Is it possible to “program” an analog computer as one does a digital computer? For example, can a compiler be made so that a recognizable programming language can be used to control an analog computer? Or is the “programming” done at the hardware-configuration level?
> Is it possible to “program” an analog computer as one does with a digital computer?
Yes. You design a circuit; your program is a diagram. Here is a simple one[1]. Believe it or not, when you get good you can suss out the basic function of such a thing pretty fast, much like a decent programmer can grasp the structure of code they've just encountered. There are a lot of idioms to learn, and you need to be able to approximate logarithmic functions in your head.
The 'op' in op-amp (those big triangles) stands for 'operational', as in mathematical operation. Their output is some mathematical function of their inputs (the + and - terminals). That circuit multiplies two numbers and divides the product by a third number.
It should be noted that the op-amps don't perform the multiplication directly, they can only do scaled/weighted addition or subtraction.
It should also be noted that this circuit can only operate on positive signals. A full four quadrant multiplier circuit, where both inputs can be positive or negative, is more complex AFAIK.
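Numerically, the trick behind such a circuit is the log-antilog arrangement: log stages turn multiplication and division into addition and subtraction, which an op-amp summer can do, and an antilog stage converts back. A sketch assuming that topology (which also shows why it is single-quadrant: the log stage needs positive inputs):

```python
import math

def analog_mul_div(a, b, c):
    # the op-amp adder sums the log-stage outputs; exp() is the antilog stage
    return math.exp(math.log(a) + math.log(b) - math.log(c))

print(analog_mul_div(3.0, 4.0, 2.0))   # 6.0 = 3 * 4 / 2
```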
Analog computers are simply computers that use a sliding scale between whatever counts as zero and one. You define relationships between inputs and outputs, and just as with digital computers you have to deal with underflow and overflow. If it works, it works well enough, but the big killer is noise. In digital you take this hit twice, quantization on input and the reverse on output, leaving everything off by a little. But in the analog domain the noise increases with every step, and there is no way to correct for it. So if at any point you are dealing with very small values, the output of your analog computer will become noisier and noisier, until it is unusable.
This is why digital computers won out over analog computers: they are deterministic. Analog computers would like to be, and sometimes are, deterministic, but the nature of the device works against you all the time.
The kind of abstractions that we can achieve in the digital domain would be impossible in the analog domain.
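A toy simulation of that noise-accumulation argument (stage count and noise levels are arbitrary illustrative values):

```python
import math, random

random.seed(0)
signal = [0.05 * math.sin(i / 7) for i in range(100)]
STAGES, NOISE, Q = 50, 1e-3, 1e-3

analog = list(signal)
for _ in range(STAGES):                       # analog: noise compounds per stage
    analog = [s + random.gauss(0, NOISE) for s in analog]

digital = [round(s / Q) * Q for s in signal]  # digital: quantize on input...
digital = [round(s / Q) * Q for s in digital] # ...and again on output

def rms_err(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

print("analog chain error: ", rms_err(analog, signal))   # ~ NOISE * sqrt(STAGES)
print("digital chain error:", rms_err(digital, signal))  # bounded by Q/2
```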
The noise problem affects digital processing as much as analog processing, because every arithmetic operation, e.g. an addition or multiplication, adds rounding noise.
Digital noise has the advantage that it normally does not depend on time or temperature: five years after you have bought it, unless it has broken meanwhile, a digital device will produce the same noise as on the first day (when executing the same program). A properly designed digital device will also produce the same noise regardless of whether the weather is warm or cold.
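Both points are easy to demonstrate, e.g. by summing many small numbers in a low-precision format: the rounding error is enormous, but bit-for-bit repeatable on every run:

```python
import numpy as np

total16 = np.float16(0)
for _ in range(10_000):                     # each add rounds to float16
    total16 = np.float16(total16 + np.float16(0.01))

print("float16 running sum:", float(total16))             # stalls at 32.0
print("float64 running sum:", np.float64(0.01) * 10_000)  # ~100.0
```

The float16 sum stalls once the spacing between representable values exceeds twice the addend, yet it stalls at exactly the same value every time, which is the determinism being described.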
The greatest difficulty in analog processing is to avoid changes in behavior caused by the variable ambient temperature or by aging.
Besides the immunity to temperature and time effects (within reasonable ranges), the second advantage of digital processing is that it is much easier to implement programmable devices, where, e.g. if the noise is too high, one can just change the algorithm to use numbers with more digits. That will work fine, even if it may increase the power consumption or reduce the signal bandwidth (due to slower digital operations).
On the other hand if the SNR is not good enough in an analog computer, increasing it may require a complete redesign.
Analog computers are - in all cases that I'm familiar with - clockless.
With respect to Sutherland, I think you're talking about 'fleet', which was a digital async solution, HN discussion from back in the day (13 years ago, all resources seem to have vanished):
It is possible to do analog processing of signals that have been sampled in time, using a clock signal, but which have not been quantized in amplitude.
Some circuits of this kind are widespread, e.g. the so-called switched-capacitor filters or the so-called charge-coupled devices (CCD), which are used in many image sensors.
It is possible to make an analog computer that works like a synchronous digital computer, with all operations done under the control of a central clock signal.
In such an analog computer, the digital registers of a digital computer are replaced by capacitors which store electrical charges that are the analog quantities corresponding to the numbers stored in the digital registers of the equivalent digital computer.
Such a synchronous analog computer would be slower than an analog computer that works in continuous time, without a clock, but the synchronous analog computer can be made to have a higher precision, because its precision can be made to depend only on area ratios between capacitors, and such ratios can be very stable during aging or temperature variations.
In an analog computer with continuous time, without a clock, the precision is also influenced by other kinds of electrical components, at least by resistors, and in integrated circuits it is difficult to make high-quality resistors.
Traditionally, analog computer programming has been done through manual hardware reconfiguration, like plugging in patch wires. It should be possible to build a programmable hybrid digital-analog computer, with the digital component executing the program and essentially treating the analog components as peripherals, so that the connections between analog components could be changed dynamically. This would require a completely different programming paradigm.
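A purely hypothetical sketch of what that paradigm might look like; every class and method here is invented for illustration, no existing API is being quoted:

```python
class AnalogFabric:
    """Digital host's view of an analog unit: a crossbar of integrators,
    summers and multipliers, rewired at runtime."""
    def __init__(self):
        self.connections = set()
    def connect(self, src, dst):
        self.connections.add((src, dst))        # close a crossbar switch
    def set_coefficient(self, block, value):
        print(f"set {block} gain to {value}")   # program a DAC/digital pot
    def run(self, seconds):
        print(f"let the fabric settle for {seconds}s, then sample the ADCs")

# The "program" wires up dv/dt = -k*v, i.e. a decaying exponential:
fabric = AnalogFabric()
fabric.connect("integrator0.out", "inverter0.in")
fabric.connect("inverter0.out", "integrator0.in")
fabric.set_coefficient("integrator0", 2.0)      # k = 2
fabric.run(0.01)
```

The digital side never computes the solution; it only configures the topology, sets coefficients, and reads the result back through ADCs.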
Some people think that early applications of quantum computing will be in the analog style, e.g., Quantum Analogue Computing <https://arxiv.org/abs/1001.2215>.
Analog is certainly trying to make a comeback, in the form of quantum computing. People often focus on the qubits, which are beautifully quantized and therefore "digital" by their very nature, but the story is more complex. Behind every quantum computer is a bank of DACs and ADCs and AWGs to drive the thing. The unsung heroes of quantum computing devote their lives to calibration: tuning up those analog devices to gain access to the brag-worthy qubits.
Funny, I was thinking quantum computers are actually replaying the transition from analog to digital. The qubits you get from pure analog control are too noisy (1e-3 gate failure rate). You need error correction to get to good error rates (like 1e-12). Error-corrected operations are digital: they come from a small finite supported set, and you achieve operations outside that set by composition. For example, operations with continuous control (like arbitrary rotations) get decomposed into a series of axis-aligned 45-degree rotations, similar to how an analog voltage addition gets digitized into a series of FULLADDER/HALFADDER blocks.
From a limited understanding of quantum computing mostly inferred from physical chemistry, I consider it analog in the sense that the "meat" of the computation is performed by a black-box physical system under appropriate constraints. Kind of like the trajectory of a thrown rock can be computed by throwing a rock and recording its path.
Quantum computers don't analyze physical systems by reproducing them 1:1 in a different substrate and evolving them. They can be used that way, but it's inefficient. For example, you might do chemistry computations in first quantization, where you keep a list of electron locations (each location being a triplet of two's-complement integers, under superposition). You operate on this representation using adders and table lookups and other not-at-all-analog approaches.
For example, [1] is about preparing first quantized fermionic states more efficiently. It uses sorting networks to do it. Nature doesn't prepare these states using sorting networks, but that's no reason not to do it in the quantum computer to reduce costs.
Possibly for AI inference applications in low cost edge devices (e.g. security cameras). I don't think it will in general though because you completely give up determinacy and repeatability.
Whilst analogue has some areas in which it shines, the real technology that I'd love to see leaned upon more is asynchronous (clockless) circuit/CPU design. It would do away with fixed clock frequencies and save so much power, with no need for the stepped-clock-frequency logic we use to bodge around the synchronous circuits we have in our CPUs today.
Alas, design tooling and skill sets hold this back; it's way easier to do a synchronous design. FPGAs had an era (to some degree they still do) in which tooling held them back, and whilst that has improved, it highlights how software and hardware become intertwined.
Which leads back to analog designs for problems: in many cases the hardware is the software, so the total time to develop is not just getting the hardware out the door, it is nailing the software, per se, at the same time. Then you are looking at hardware akin to an ASIC, dedicated to the task it does and nothing else.
One thing though: the term "bug" originated from the analog era, in which an insect (a bug) would become lodged in the mechanism and cause errors; a bug in a cog changing timings or jamming things up is one example.