I wanted to check first, and I wasn't saddened when I found NetBSD was also available for riscv64. That being said, having a full Debian suite to nicely land on is what will gain it adoption. It's a chicken-and-egg problem where one side is vigorously designing and publishing a hardware spec in search of an operating system while at the same time an entire ecosystem of open source development is in search of open hardware to run on.
Fully open systems are an entirely ideological win at this point with a now sub-$50 entry point setups available today. Please also do not get me wrong, working around copy protection and binary blobs serve as amazing learning tools for tinkerers, but not for true beginners.
Some people learn from hands on breaking it and putting it back together, but others learn by knowing that there is a fully documented dearth of information within a mere quandary of any part of the system.
These two worlds are not antagonistic, but in fact entirely complementary to each other. To fully understand and master a system, you have to break it and know how to fix it. While on the same hand knowing how to see what part broke and see an exact way to fix it in the completely free entirety of information available until time itself forgets it, gives you the confidence that there isn't anything you don't know or knowing instantly where to find it if you don't.
That free and open standard will lend itself to future chip designs, broader software options for those designs, and start to make computers adopt there new role as segregated fully autonomous systems loosely networked together into the same status that all 99th percentile that all previous technology has adopted throughout history.
The nerds won, we made computers into a skilled labor.
The only benefit is companies not having to pay ARM. The architecture NEVER WAS THE PROBLEM for openness; it was always the binary blobs required to run IP blocks put on the die with the cores that stopped people from using them.
ISA is the easy part. Even if you need to RE it, you only need to do it once.
Now hopefully with RISC-V, companies will be even more inclined to just upstream their drivers from the start (and not commit the abominations that happen way too often on the ARM side), but the ISA was never the problem there.
> it was always the binary blobs required to run IP blocks put on the die with the cores that stopped people from using them
That's not going to change with RISC-V. If you've got silicon that has to conform with FCC regulations it's going to have non-modifiable firmware. No matter how open the rest of the system is, your WiFi and Bluetooth (and any other radios) will have closed firmware.
Probably a better example than WiFi would be the on-chip SDRAM controller. It's always somebody else's IP, and there's a blob in the boot firmware that's just binary register settings.
The why is simple: power/frequency limits let everybody use personal consumer devices without licensing and without preventing other people from using their devices. If you modified your WiFi to blast out a 10W signal across multiple channels at your house, you'd completely ruin my ability to use WiFi at my house next door. Radio spectrum is shared by everybody.
As for reading there's CFR title 47 [0]. Parts 15 and 18 are germane for unlicensed radios and electronic devices. Parts 22 and 24 cover cellular devices.
The regulations don't explicitly say anything about firmware, but to build devices that follow them, locking out end-user-modifiable firmware is an implicit requirement. Even user-serviceable antennas are restricted, because radio device licenses cover not just the electrical output but the total gain of the shipped antenna.
> The why is simple: Power/frequency limits allow everybody to be able to use personal consumer devices without licensing and without preventing other people from using their devices.
That explains why there are power/frequency limits, not why the device manufacturer should be deputized into responsibility for a sophisticated user's non-compliant device modifications.
Anyone can make an arc gap transmitter for morse code out of $3 in bits from any hardware store that will interfere with anybody else's radio devices in the vicinity. Anyone can buy ham radio equipment, or build it from parts, and do all kinds of non-compliant things with it. Then the FCC comes after you, not the hardware store or the device OEM.
Or more likely in the case of a WiFi device, comes after the person distributing custom firmware that purposely exposes a simple knob to allow unsophisticated users to exceed regulatory limits.
And if DD-WRT did that, they should expect a visit from The Government. But what should that have anything to do with Linksys or Netgear?
There aren't enough end users who know how to modify the firmware code themselves to matter, even if you make that "easy."
> There aren't enough end users who know how to modify the firmware code themselves to matter, even if you make that "easy."
This is just a silly statement. Go to an apartment complex sometime and browse the available WiFi networks. You'll see a huge number of them because everyone has the output power on their router set to the max value. There's plenty of places the 2.4GHz band is simply unusable because the noise floor is so high from a hundred base stations blasting out at full power.
If WifiBoost.exe could increase that output power, enough people would run it that WiFi or Bluetooth in some places would be completely unusable.
Modern radio basebands are largely software defined. The modulation/keying, power output, and transmitted bands are all defined in software. In order to sell that silicon as a Part 15 compliant device to end users the firmware needs to be locked. It's the digital equivalent of a fixed function radio. A manufacturer of a fixed function radio couldn't get a Part 15 license if it had a potentiometer on the back allowing you to dial up the output power, even if that potentiometer was locked under the case most people wouldn't open.
With an SDR the hardware plus software is considered the "device" for licensing purposes. If it supported unlocked or modifiable firmware it couldn't be easily/at all sold as a Part 15 device. It would be a different class of device and would require the end user to have a license to operate it.
> You'll see a huge number of them because everyone has the output power on their router set to the max value.
None of these people are firmware developers. They used the setting that existed from the factory in their router or in something they downloaded from the internet, and in the vast majority of cases it wasn't even the second one.
> If WifiBoost.exe could increase that output power, enough people would run it that WiFi or Bluetooth in some places would be completely unusable.
Then WifiBoost.exe would be illegal and the developers would be subject to penalties.
It would also be ineffective, because you can't remove interference by increasing power. The "highest allowable power" setting is often the default because it gives the best range, and the purpose of allowing it to be set at all is so that the user can reduce interference at the expense of range by lowering the power level so fewer devices are overlapping.
It's hard to get people to avoid doing illegal things when breaking the law benefits them. It's not that hard when breaking the law is pointless and maladaptive regardless of whether or not they get caught.
It's notable that there have existed routers with completely open firmware and there is no epidemic of ordinary users installing shady firmware on them to violate regulatory limits. There is likewise full software defined radio hardware on the market, for which a license is required to transmit but not to buy it. Applying a different standard to consumer hardware which is available to exactly the same set of people makes no sense.
> With an SDR the hardware plus software is considered the "device" for licensing purposes. If it supported unlocked or modifiable firmware it couldn't be easily/at all sold as a Part 15 device. It would be a different class of device and would require the end user to have a license to operate it.
I'm not arguing about what the law currently is but rather about what it ought to be.
But I also think your analogy is flawed. If the manufacturer sells a device with a potentiometer whose maximum setting was within the regulatory range, or with a fixed power output, and then the user modifies the device to install one that can increase the power output, that should be on the user. So why is it different if the user modifies the device to install firmware that increases the power output?
You're essentially arguing not that the device can't include a potentiometer, but that the case has to be sealed so the user can't install one.
Which in turn prevents the user or any other third party from repairing the device or supporting it past when the vendor stops caring about it, inducing widespread security vulnerabilities as users commonly continue to use operational devices even after the hardware vendor stops issuing updates.
> You're essentially arguing not that the device can't include a potentiometer, but that the case has to be sealed so the user can't install one.
No, I'm literally saying a company can't include a potentiometer that boosts power above the licensed levels. It doesn't matter if the case is sealed or screwed shut; the design will not get a Part 15 license and can't be sold. User modifiable firmware is the same thing as the power boosting potentiometer. If the user can boost the power, the device is no longer a Part 15 device, needs a different license for sale, and the user will require a license of their own to operate it.
It doesn't matter what you think the law should be; that's what the law is. The FCC as a regulatory body doesn't care about FOSS; they care about keeping an easy-to-overexploit shared resource in significant and extremely profitable use. If licensing keeps the system working but makes FOSS inconvenient, they're going to err on the side of a working system. The FCC is far from perfect and Title 47 regulations are far from perfect. But they don't exist for no reason and didn't just appear overnight to inconvenience FOSS enthusiasts.
> User modifiable firmware is the same thing as the power boosting potentiometer.
It's a third party modification to the device. Why is it any different from any other modification that could make the device non-compliant? Why does the OEM have to prevent the user from modifying the device in this way -- notably by also preventing them from modifying it in many ways that aren't a compliance issue, which is by far the more common reason to modify the firmware -- but not prevent them from modifying it with a screwdriver or a soldering iron?
> It doesn't matter what you think the law should be
It matters what people think the law should be because when regulators set policy that causes widespread security vulnerabilities in common consumer devices, we get to apply pressure to them using every means at our disposal until they do better.
> The FCC as a regulatory body doesn't care about FOSS
Regulatory bodies have to care about anyone they negatively impact who can marshal enough support to make them feel pressure. "Open source" is a lot of people, and a lot of major companies. Right to repair has seen some significant legislative support.
> they care about keeping an easy-to-overexploit shared resource in significant and extremely profitable use
Which user-modifiable firmware is no more a threat to than any other modification the user might make to the hardware, or for that matter the widespread availability of hardware you're intended to need a license to operate even though anybody who doesn't care about breaking the law can still buy it and use it.
> But they don't exist for no reason and didn't just appear overnight to inconvenience FOSS enthusiasts.
I don't think they exist for no reason. I think the OEMs like rules like that because they can point to it as the fig leaf for not allowing users to continue to maintain their own devices after the OEM stops supporting them, or add software features they would otherwise reserve to a more expensive model, because the OEMs would prefer that you buy a new one instead, or pay more. And that kind of regulatory capture causes me to advocate for its removal.
It only has power level control up to the limits for ISM devices on its operating frequencies. Its license is only valid for antennas with specific gain.
You technically can overwrite its firmware to turn the radio into some general purpose software defined radio transmitter but that's not how the device is licensed and sold.
Wait, though. Doesn't that seem to make this point moot? If they're not responsible for what users do hacking around, then there's not really any need to keep firmware closed source or locked down.
To be fair, I think at least at one point, that is in fact how it worked. I recall having a wrt54g and being able to run it out-of-spec using DD-WRT.
> Doesn't that seem to make this point moot? If they're not responsible for what users do hacking around, then there's not really any need to keep firmware closed source or locked down.
To sell the device the manufacturer has to make sure the device follows regulations and that normal usage will stay within end-user rules. The easier they make regulation-violating user modifications, the more likely the product is to be denied a license for sale.
In something where the radio baseband is a physical circuit layout, conformance is easy. Without modifying surface-mount parts, that circuit will always conform to its license.
For complicated basebands (WiFi etc.) with software control, having a fixed/closed firmware is the easiest way to get a license and not have it pulled after the fact.
The WRT54G is a fluke because it ended up using Linux/GPL code in the firmware. It was then obliged to release the source because of that. You can buy devices that allow firmware development and modification but not at retail and you can't offer them for sale without an FCC license covering them.
Okay, but you could release the source code for the firmware and only accept a signed binary blob. This would still be open source (GPL v2 compliant, but not v3 AFAIK) and auditable, but it would still be conforming and certifiable.
And some manufacturers do just that. Many do not because they're sub-licensing components/code from other companies. Others don't because they want to license their hardware to others and firmware source is a special sauce when the functions of the radio are all in software.
A device SKU only meant to be sold in ITU region 2 or North America specifically will often have the available bands/channels locked in the firmware despite the capability of the underlying hardware. Same for a SKU meant for other ITU regions or countries. It's far less common to have user selectable region settings for things like WiFi channels. It used to be more common but regional SKUs are just easier for everyone involved.
A device letting you specify channels not allowed in your country/region doesn't mean you're legally in the clear. "But there's a switch" isn't a defense against an FCC fine. It's unlikely you'll get caught and fined for using channels 12-14 on a WiFi device in the US, but if you interfere with something important you're breaking several laws and if caught can potentially face jail time.
Where does the FCC specify the delivery format of firmware? Or what "firmware" is as a concept in general?
Non-modifiable - by whom and to what degree? One-time-programmable memory? Accepting only signed updates from the vendor? Where are these things specified in enumerated FCC regulations?
Completely moot point. They are not selling a device; they are selling a chip. It's on the device manufacturer to prevent it, and it doesn't need to be in the baseband chip.
For Part 15 devices the licensed portion of the device is the software and hardware that does the RF emission. Firmware defining operation is seen by the FCC (and other regulatory bodies in other countries) as the same thing as the circuit design of a fixed radio transmitter, to the point that firmware updates can require recertification just like a change in the BOM or circuit design would.
A RISC-V radio baseband will end up no more open than a Cortex-M one. The CPU core portion might be documented/open but not the full firmware. It'll still be effectively a black box no matter how open the specs on the processor.
No snarkiness intended, but I just read your post for the 3rd time and although I understand each sentence individually, I'm still not sure what point you're trying to make.
> there is a fully documented dearth of information
... is nonsensical.
> a mere quandary of any part of the system
More nonsense.
> in the completely free entirety of information available until time itself forgets it
... wut?
> adopt there new role
Benefit of the doubt: typo
Doesn't make sense even fixed, though.
> segregated fully autonomous systems loosely networked together
... wut?
> the same status that all 99th percentile that all previous technology
What?
> The nerds won, we made computers into a skilled labor.
Always was.
This is word salad. If you scan very hastily and you're not good at skimming -- and writing online for 10s to 100s of thousands of readers a day has taught me that many people can't skim to save their lives but they don't know that they can't -- it may look superficially like coherent English text, but it isn't.
tbh, on re-read I agree it's awkward, but to take some of the bait:
>> an entirely ideological win
> What does that mean?
ideological win makes sense. what's an entirely ideological win? one without non-ideological impact.
>> there is a fully documented dearth of information
> ... is nonsensical.
disagree. it is entirely sensical for a dearth of information about a well-defined topic to be documented. see: every github issue for project documentation ever.
Well, OK then. @superdug keeps posting and responding and seems to be acting like a human, so maybe it's just me.
There is a chap in the Hitchhiker's Guide fan club who I've known online for >20Y who is a native English speaker, but his comments and posts online read almost like bot-generated text. Most of them, I have to ask him to explain, often 3 or 4 times, until he can produce something coherent to other humans. His grammar and spelling are perfectly fine but his mind works weirdly and he writes about references to passing thoughts that occurred to him and is not able to recognise that others do not share those random associations. Some of my friends have got so irritated with it, they've just blocked him.
Perhaps this is a case a little like that? It makes almost no sense to me, but apparently, it does to you, so maybe the failing is on my end.
Simply someone getting a break to philosophize about a crowning achievement in my eyes as the entire idea of free and open had yet to even be fully defined let alone on the precipice of realization four decades ago when I signed on to this free information roller coaster that changed the world.
"Fully open systems are an entirely ideological win at this point with a now sub-$50 entry point setups available today."
It's not entirely ideological. The advantage an open, license-free RISC-V brings to the table is ISA flexibility and freedom: the ability for people to bring innovation in the form of extensions and create their own flavours.
And where we'll hopefully see this innovation is in realms like AI, graphics, and vector compute, e.g. bringing compute back in from the GPU, with chip producers bundling extensions into their custom RISC-V implementations and then providing the requisite add-ons for LLVM, SDKs, etc.
We've already seen the excellent things Apple was able to do with ARM because they had the chip under their own control and were no longer beholden to ARM^H^H^HIntel (or Motorola/IBM before that). I am hopeful we'll see similar excellence around RISC-V. (EDIT: I had typo'd "ARM" here but meant to write Intel)
Though this will all come at the cost of some amount of fragmentation, at least the base instruction set is standardized, and a standard method for extension put in place.
> We've already seen the excellent things Apple was able to do with ARM because they had the chip under their own control and were no longer beholden to ARM (or Motorola/IBM before that).
Sorry? Apple remains an Arm licensee and everything they have done will be consistent with the terms of their license.
Oh, just noticed I'd written "with ARM" when I meant to write Intel when talking about Apple being beholden. Fixed.
My understanding is Apple has an architectural license, broader than what some other licensees have, and it permits them to make architectural changes that people without that license cannot. And while I don't know the current business terms of this, I would speculate that by them being part of the original joint venture that created ARM in the first place they have at least some remaining better pricing on that than others would?
In any case, correct me if I'm wrong on that... but there's also the fact that by controlling the whole software stack Apple is also able to initiate changes that would be difficult for any other hardware vendor.
FWIW I think it's unlikely that being a founder has any impact on current terms; we're now thirty years and several new licenses on. Being a huge high-profile customer will have an impact, so Apple can probably get better terms than if you or I tried to buy an architecture license!
Apple had to pay a lot for that license, and even if you are willing to pay, it's not that easy to get. And even if you do pay, you are not actually allowed to resell licenses for that chip.
RISC-V gives all people a level of power that is even higher than what Apple had with ARM.
As somebody from Esperanto said when they were making their chip: if we had asked ARM, it would have been "pay up a couple million, and then you can't do X, Y and Z".
I really don't get this sort of take: the world's most valuable company and a startup backed with $100m+ shouldn't have to pay for the IP that they use? Will Esperanto be giving their IP away for free and letting other firms do what they want with it?
Sure RISC-V is great in many, many ways but having the 64-bit Arm ISA ready for Apple to use in (checks notes) 2013 has been fundamental to them building their multi-trillion dollar business. Getting that ISA ready cost real money.
They don't want actual ARM IP; they only want the ISA. It's questionable whether ISA design should be protected.
To get an architectural license it's millions. Do you think it's reasonable to pay millions just for an ISA specification?
And even with that specification, there would be lots of restrictions still.
Listen to Jim Keller on the topic; there are also other issues with how slow development is. Lots of companies are already shipping accelerators that don't even have an official ARM spec yet.
> Sure RISC-V is great in many, many ways but having the 64-bit Arm ISA ready for Apple to use in (checks notes) 2013 has been fundamental to them building their multi-trillion dollar business. Getting that ISA ready cost real money.
That's nice and all for ARM and Apple.
But now we are in the future where an open standard exists. Once you move to an open standard you don't go back very often.
I would look at JH7110, released earlier this year, used in VisionFive 2, Star64, as well as PineTab-V, and with an excellent official effort towards upstreaming software support[0].
I own a VisionFive 2 and can recommend it.
There's also the TH1520, a newer chip that's a bit faster and has pre-spec V extension. This is available in the Sipeed Lichee Pi4A, but the support for this chip is very green still.
CPU-wise, JH7110 performs between rpi3 and rpi4. TH1520 is faster than rpi4.
On everything else (GPU, I/O, cryptographic acceleration, hardware video codec acceleration and other peripherals), they are both significantly faster than the rpi4.
TH1520 is definitely not faster than an RPi4, not for normal programs compiled with normal compilers or binaries that will come from your distribution. And no, Bruce, vendor benchmarketing doesn't count.
ADD: TH1520 is about twice as fast as JH7110, but as best I can tell, StarFive (JH7110) is more proactive about upstreaming support.
I have a StarFive VisionFive 2 [1] board at home to play with RISC-V. Can be had fairly quickly from AliExpress for ~100 bucks.
Very similar in nature to a raspi, but still a bit rough around the edges:
- getting Debian up and running is not yet straightforward
- this is not exactly a speed demon (it's actually dog slow compared to an rpi 4)
- some features aren't in the kernel yet (eg some USB devices don't work)
- support for HDMI is still clunky
- [EDIT]: one very nice thing, it has an NVME port and you can choose (via dip switches) to boot from either of NVME, Flash, EMMC or UART (see page 46 of [2])
Mine boots from NVME and it's really sweet; it's one major missing feature of the rpi family.
BUT: it's a very usable computer, and great if you want to play with RISC-V assembly on actual metal (outside of a qemu box, that is).
https://milkv.io/pioneer seems to be the most powerful rn, but I'd only recommend it for developers who are working on getting things like gpu drivers ported.
For most development smaller SBCs are enough, and I'd recommend waiting for RVA22/23-profile-supporting hardware before spending a lot of money, because most current chips use the non-standard vector ISA and don't support all profile extensions, since they were created before everything was ratified.
Current chips largely implement what we retroactively call RVA20: RV64GC. This was ratified in 2019 and unmodified since 2017.
The next batch we are aware of (Ventana Veyron, Tenstorrent Ascalon, SiFive P470/P670/U84, XuanTie C908) implement RVA22+V. The extensions included were largely ratified in December 2021.
Do you know if Ventana Veyron and Tenstorrent Ascalon will be available to regular consumers? I mentally lumped them in with Andes, from which I haven't seen any vector cores available anywhere, even though they "released" them a long time ago.
I think that the TH1520 in the Beagle-V and Lichee Pi 4A currently only has support for the 5.10 kernel, at least for access to most of the features of the SoC. If you do not mind losing access to most features of the SoC, then you can run any kernel. Examples of what you would lose with any other kernel are GUI support and the ability of the VPU to encode/decode video, but there are many, many more (for now; hopefully they will upstream support).
Plug "site:lore.kernel.org TH1520" into your search engine of choice to see what hardware is in the process of being upstreamed to the bleeding-edge linux-next kernel, which is probably about a year away from landing in a long-term supported Linux kernel.
Yes there are some SBCs around if you want to experiment. Pine has one line of boards now.
I'm hoping for desktop grade stuff instead. I don't mind if the CPU is soldered but it would be nice to have memory and pci-e expansion slots. Something to match the POWER Raptor machines as open source but not as expensive.
Anything based on the RK3588, like the Radxa Rock 5B. Just be careful to note the distinction between the RK3588 variants, as some have PCIe 3 and some only have PCIe 2.
I’ve noticed a lot of projects use riscv64 as shorthand for the rv64gc ISA, especially when it’s understood that there is an MMU, maybe an FPU, privilege levels, and any other extensions that a modern OS requires.
Anyway, I did find this on the Debian website[1]. So it seems they mean rv64gc.
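In compiler terms, that baseline presumably corresponds to flags along these lines (rv64gc and lp64d are the standard GCC spellings for the ISA string and ABI; the exact flags Debian's buildds use may differ):

```
# Target the RV64GC baseline: RV64I + M/A/F/D (the "G" bundle) + compressed
# ("C") instructions, with the lp64d ABI (64-bit longs/pointers, FP args in
# floating-point registers).
gcc -march=rv64gc -mabi=lp64d -O2 -o example example.c
```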
Presumably Android will also choose a base minimum ISA spec for riscv software at some point. It would be tidy if they matched the regular Linux ecosystem.
It has not been decided. There was a talk about the details just 2 weeks ago on the RISC-V International YouTube channel. Very interesting and recommended.
Would anything prevent compilers from approaching it like SSE on Intel? Check for feature presence and enable the appropriate path (if using compiler-generated code).
That is the hard part. On Intel you can use CPUID, but it is ARM policy not to expose such instructions to userspace. You can read /proc/cpuinfo, but that is Linux-specific.
Edit: there is a reason for the ARM policy: CPUID is a well-known virtualization hazard. In fact, KVM immediately traps if you execute CPUID in a guest. ARM made a good decision here. Still, it means things can't work exactly like they did on Intel.
The Arm Linux kernel allows you to use some of the "read ID register" instructions from userspace, because it traps them in the kernel and emulates them to present you with a slightly sanitized view of the available hardware: https://www.kernel.org/doc/html/v5.8/arm64/cpu-feature-regis...
You can also look at the hwcaps (available in the ELF aux vector) -- this is the older mechanism.
It's true that there's no cross-OS mechanism to do this, but that's life -- often the OS wants to get in anyway to sanitize the answers (eg so it can tell you "feature X is not present" when it knows about a hardware erratum or the OS was built without feature-X support).
On Arm this is generally a bad idea -- there are, or were, some corner cases where the kernel can know that an extension shouldn't be used, but it doesn't have a mechanism for "make the instructions UNDEF". The example I know about is ancient history now -- on the Cortex-A8 I think you could build a kernel without Neon support or perhaps the kernel might find there was a Neon-related erratum, but there was no way to disable Neon to force the UNDEFs.
The recommended approach is to use HWCAPs, or else to use the kernel's "emulated ID register accesses" functionality.
Debian distributes compiled binaries. Thus, they have to either turn processor features on in all their binaries, or off (or distribute two sets of binaries).
If there is a downfall of RISC-V, it's going to be the myriad of sub-architectures and their weird naming schemes, with an added layer of marketing drones muddying the waters even further.
This is going to confuse the hell out of the potential customer base and it would be a crying shame, wasting a golden opportunity to get out of the oxygen-choking clutches of proprietary ISA vendors (who at this point are purely rent-seeking and add exactly zero actual economic value to the ecosystem).
I mean, at the very least the proprietary ISA vendors provide the service of selecting which extensions get added. If you think too many extensions will take down RISC-V, then that service has value.
RISC-V will do the same thing. That's what profiles and even further platforms will provide. Most software (the large investment) will target these profiles.
Not following the profiles will be possible but will cause issues.
I don't think so in practice, since atomics (especially on single-core systems) and floats can easily be emulated. If larger, more-incompatible extensions become commonplace, possibly.
When Debian is ported to a new architecture does that mean something special needs to happen to all of the packages in the repository for them to work on that new architecture?
Well-written applications will just work. Badly-written applications may require patches or be impractical to build at all, and some system software may not make sense on the architecture at all.
There's a whole lot of build infrastructure for Debian; by default the package maintainer doesn't need to do anything specific, building for all architectures will happen by default.
I wouldn't go as far as telling something is badly written just because it's written to work on one platform. It can be an exceptional application even when it only works using a single platforms' intrinsics. Similarly a badly written application may work on many platforms
Other than word size (eg. i386 vs. x86_64), endianness (eg. PPC vs. x86), and alignment (eg. ARM vs. x86), there are plenty of other low-level issues like parameter passing conventions, call frame handling, exception handling, presence or absence of optional instructions or registers, and a number of other subtle gotchas that could affect a port. Instruction probing is one of the ones we run into, where everything builds fine and then fails at runtime.
If you're working at a high enough level you're probably fine. Like the difference between being an auto mechanic and using the app to hail a ride.
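On Linux, one way to probe for features at runtime without trapping on an illegal instruction is the auxiliary vector: the riscv port reports each single-letter extension as a bit in AT_HWCAP, indexed by letter. A hedged sketch via ctypes (Linux/glibc-only; elsewhere it just falls back to 0):

```python
import ctypes
import platform

AT_HWCAP = 16  # constant from <sys/auxv.h>

def riscv_ext_bit(letter: str) -> int:
    # Linux's riscv port encodes extension 'a' as bit 0, 'b' as bit 1, ...
    return 1 << (ord(letter) - ord("a"))

try:
    libc = ctypes.CDLL(None)
    libc.getauxval.restype = ctypes.c_ulong
    libc.getauxval.argtypes = [ctypes.c_ulong]
    hwcap = libc.getauxval(AT_HWCAP)
except (OSError, AttributeError):
    hwcap = 0  # getauxval is glibc/Linux-specific

if platform.machine().startswith("riscv"):
    for letter in "acdfim":
        present = bool(hwcap & riscv_ext_bit(letter))
        print(f"extension {letter}: {'yes' if present else 'no'}")
else:
    # On other architectures AT_HWCAP bits mean something unrelated.
    print(f"not riscv; raw hwcap = {hwcap:#x}")
```

This is the "check before you leap" style of probing; the failure mode described above comes from the opposite style, where code just executes an instruction and hopes.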
Debian packages declare their supported architectures. "all" means it is architecture independent and will be available to a new architecture without rebuild. "any" means it is portable and will be available to a new architecture by rebuilding. In both cases no source change is needed.
Special packages (for example, compilers generating native code) enumerate supported architectures. For these packages, source change is needed, at least adding the name of the new architecture to the enumeration. But usually that's not the difficult part.
Maybe some packages may need special attention, like various language runtimes and browsers that normally do JIT compilation to the target architecture. I guess some of these might work in interpreted mode, but not as efficiently.
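Those "all"/"any" declarations live in each package's debian/control file. A sketch with invented package names (and # comment lines, which are ignored in source control files):

```
Source: example

# Architecture-independent: the same .deb is served to every port.
Package: example-docs
Architecture: all

# Portable: the buildds rebuild this automatically for each architecture.
Package: example
Architecture: any

# Enumerated: a new port like riscv64 must be added here by hand.
Package: example-codegen
Architecture: amd64 arm64 riscv64
```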
This is the mechanical progression for risc-v: legacy support.
The real future is software written in riscv64 assembly with very conservative use of a preprocessor (or we will end up with C++-syntax-grade preprocessing), plus software written in very high-level languages (Python? JavaScript? etc.) whose interpreters are themselves written in riscv64 assembly with the same conservative preprocessor use. That's because RISC-V is meant to be the standard CPU ISA.
Like always, it depends. The hardware section on the wiki lists some device specifics¹, and if you follow the OnDebian² links there are a couple that look like they might work-ish out of the box. Presumably -- or perhaps hopefully -- things will improve quicker with mainline Debian as an option and hardware becoming more available.
In the context of Debian, let us hope people aren't flashing a BSP³ in the hope of improved board support ;)
i386 below arm64 is an interesting surprise. Seen some Debian developers mention struggles keeping 32-bit x86 going but didn't think it would have dropped that much.
a true milestone! hope it is also the beginning of the end - an ISA shouldn't be a tightly controlled asset managed by lawyers and CXOs, and companies shouldn't be sued just for trying to invest in certain ecosystems.
Open source boards have been a thing for far longer than open SoCs. So of course most of the boards in the world will be proprietary, and that's fine. But open boards have been possible for a while, and the open source tools to do that are fairly good.
RISC-V makes it possible to have an open CPU or even an open SoC on an open board. That basically wasn't even possible before. We are even getting to a point where there are open PDKs. There are libraries of analog IP that are open. Now that there are lots of options for open CPUs, people are going after the missing stuff: open Ethernet PHYs and so on. There is also now a big movement toward making the tooling open source to compete with commercial tooling: OpenROAD, OpenLane, Verilator and so on.
The benefits of this are amazing. Projects like Core-V, OpenTitan and so on simply wouldn't have happened otherwise.
You have things like RISC-V International, the Chips Alliance, LowRisc, Google and many others working on all aspects to make a fully open ecosystem possible.
But that doesn't change the fact that most SoCs and most boards in the world will be proprietary, RISC-V or no RISC-V.
> RISC V is not extension free
Not sure what this means. Extensions are part of RISC-V. The vast majority of extensions used in the real world are part of the same standard put forward by RISC-V International.
Most of the non-standard extensions came about before there was a standard for them, and most are converging to the standard pretty fast.
No it isn't. PowerPC was never free. I think you mean OpenPOWER. Again, that was just marketing; it wasn't free in any way. Only the pressure of RISC-V led to POWER and MIPS becoming more free over time.
And even though OpenSPARC had an open spec, it was only for 32-bit; the 64-bit version would never become a standard. You need more than air-dropping a spec. You need people to continue to push it forward and evolve it. You need to build a community around a standard. Sun simply wasn't interested in doing that; they just made a spec and left it there.
OpenSPARC had some success, being used in space by ESA for example. But it didn't get adoption in education, and the vendors of embedded chips were not interested. The time was just not right: the open source ecosystem wasn't ready, the publisher (Sun) didn't care about embedded, and the separation between chip designers and fabs had not fully happened yet.
Saying that RISC-V is like POWER is just fundamentally false.
Saying RISC-V is like 32-bit SPARC is slightly more correct, but misses a whole lot of non-technical context.
Yes we should give Sun credit for what they did with SPARC. But RISC-V is quite a different thing.
That's just wrong, and simply a depressing outlook on life. Saying things are only valuable if they're for literally everybody is just the wrong outlook.
Having the ability to make open end to end things is valuable even if its not in every product. Linux shows up in lots of places where people are not aware and that is a good thing. The same thing will happen with RISC-V.
And for people who like open, there will be products for them. There are companies working on open-board devices, laptops and so on. And in the future maybe those could use open chips. The customers of those products will not care whether the PC at Media Market is a proprietary computer with Windows on it.
Your attitude would basically have been "it's hardly different" when open source started to grow. But it was, and is, actually different for a lot of people and for the industry.
Those people don't care one bit about how open their Blu-ray player happens to be, even though Sony based it on Linux as a means to reduce costs and hardly contributed any changes upstream.
It has nothing to do with dreams. I have an open hardware product with a RISC-V CPU on my desk right now. And it's nice to have. I don't give a single shit if some random device from Media Market is open. The device I have provides real value to me now, and that is a good thing.
The idea that things are only relevant when Media Market customers can buy them is just sad and depressing.
Average Joe and Jane still aren't buying Raspberry Pis, that doesn't mean the Pi didn't change hobby and DIY computing for the enthusiasts. Is your definition of success complete adoption? To me a 100% open phone that handles calls and data would be a huge win. I don't even need a camera, I don't use it. I don't care what the masses do, they will follow when the day comes just like they always do the mindless zombies that they are.
Getting Debian officially supported is one of the necessary steps towards that goal.
Fujitsu has announced plans to transition away from SPARC by 2030. Oracle has laid off their staff after the M8 in 2017 (https://en.m.wikipedia.org/wiki/SPARC). If SPARC isn't dead, then it's in palliative care, and not far behind Itanium.
Shame because SPARC T series was interesting and I wonder what it would look like today at modern geometries and using chiplets.
The R&D (and probably overall) expenses and difficulty involved in designing and manufacturing an SoC board are orders of magnitude lower than the R&D expenses, difficulty, and gatekeeping required to design and manufacture that very same CPU.
I can't see a realistic scenario where Apple moves off AArch64 at this point for their MPU. They're having boatloads of success with what they've done there.
But I can imagine there's interest on their part in using something like RISC-V in supporting chipsets; modems, power mgmt, disk controllers, etc.
That and having something in their backpocket "just in case."
Apple is the only company with a special licensing deal with ARM that lets them do whatever they want and develop their own ARM cores without relying on ARM IP or premade designs like everyone else. They've also sorted it out financially where they're probably not incurring incremental costs, though I think the details are secret.
As open source options mature & become more plentiful, it simply becomes more 'painful' for companies to ignore / go a closed sourced route.
Closed source = extra cost, licensing red tape & so on. However small, it's a bother.
Need an OS? Grab open source one. Want that + 'secret sauce'? Grab BSD flavour OS & add your secret sauce. Want OS with custom user interface? Grab open source OS & add custom UI shell.
Going closed source for anything where open source does the job, just means extra development effort, or paying 3rd party $$ while (often) not necessary. And eg. $0.05 royalties / device matters when it's a part with $3 production cost.
On the hardware side this is a much slower moving train. But same principle.
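For scale, a hedged back-of-envelope using the figures mentioned above ($0.05 royalty on a part with a $3 production cost; the volume is an invented round number):

```python
royalty_per_unit = 0.05   # $ per device, from the comment above
unit_cost = 3.00          # $ production cost per device
volume = 10_000_000       # illustrative annual volume

share = royalty_per_unit / unit_cost
total = royalty_per_unit * volume

print(f"royalty as share of unit cost: {share:.1%}")   # → 1.7%
print(f"total royalties: ${total:,.0f}")               # → $500,000
```

Small per-unit, but a meaningful margin hit and a six-figure annual check at volume, which is the "however small, it's a bother" point.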
There's a reason people pay for ARM tools. There's a reason people pay SiFive and StarFive to use the rocket core generator for them. (heck, there's a reason I still use WDC65C02's in new designs.)
While I'm generally a fan of open source software, open source hardware is a different beast. The cost of copying an open source hardware design is noticeably higher than the cost of copying open source software.
You always pay for what you get. If you're lucky, you pay in cash.
We already have riscv64-linux builds of Zig enabled with every CI run[1]. Happy to see that they might finally be utilized by more users soon.
[1]: https://ziglang.org/download/