ASUS Announces Mini PC PL64 with Fanless Design and Up to a 12th Gen Intel Core i7 Processor

Tsing

ASUS has announced the Mini PC PL64, a compact and reliable mini PC built for indoor industrial applications that is powered by up to a 12th Gen Intel Core i7 (15 W) processor and features a fanless design that ensures optimal performance, even in ambient temperatures of up to 50°C.

See full article...
 
It's kinda neat; too bad you're stuck with the Xe iGPU. I would love something this size for the living room, but it would need to be capable of actual gaming.
 
It's kinda neat; too bad you're stuck with the Xe iGPU. I would love something this size for the living room, but it would need to be capable of actual gaming.
Hopefully AMD's next round of APUs are fully up to par; I'm not expecting RT of course, but everything else in the 7000-series GPUs, plus hopefully a DDR5 controller that can do 8000 MT/s.

That said, Intel's GPUs are nothing to scoff at, in my personal experience. For the kinds of games that you'd be willing to suffer through on an IGP, they perform really quite well, and their Arc drivers for various popular titles are improving in leaps and bounds. Hopefully that translates into better IGP drivers as well.
 
Hopefully AMD's next round of APUs are fully up to par; I'm not expecting RT of course, but everything else in the 7000-series GPUs, plus hopefully a DDR5 controller that can do 8000 MT/s.
That would be nice, but AMD's APUs were stuck on Vega for some reason for so long - the last gen (Rembrandt) just made the jump to RDNA2.
 
If those two Ethernet ports are Intel chips (and not some garbage like Realtek) this could be a great pfSense box.

Hard pass on the integrated wifi though.
 
If those two Ethernet ports are Intel chips (and not some garbage like Realtek) this could be a great pfSense box.
For 2.5Gbit, Realtek actually has a better record - for consumer NIC ICs. Intel's i225 has been a real pain even up to the last generation (Z690 / X570 / B550 / etc.). This includes with *nix, though you'd have to check on FreeBSD and pfSense support.

Hard pass on the integrated wifi though.
If you don't need it, can always yank the card out! Can even use the slot for a limited set of alternatives. Personally, I'd rather have it available than not, as adding those afterward into their dedicated slots is a real pain sometimes.
 
If you don't need it, can always yank the card out! Can even use the slot for a limited set of alternatives. Personally, I'd rather have it available than not, as adding those afterward into their dedicated slots is a real pain sometimes.

Wasn't aware of Intel's issues with 2.5gig, but in that application I'd rather have something tried and true. Intel gigabit would be perfect. Not really going to need anything above a gig in most places. At least not yet.

I don't even understand why they had to go and invent the ridiculous 2.5gig and 5gig standards when a perfectly acceptable 10gig standard was already in existence. It's kind of annoying.

Also, one thing I've never understood is why there are so many ATX and E-ATX motherboards with 10gig NICs on board, but I can't find a single Mini-ITX board that has them.

You'd think a Mini-ITX board would have more spare lanes with which to support the bandwidth, what with not being able to fit a second PCIe slot and all. And it's also more useful on a Mini-ITX board, because you don't have the PCIe slots to add one yourself!
 
Wasn't aware of Intel's issues with 2.5gig, but in that application I'd rather have something tried and true. Intel gigabit would be perfect. Not really going to need anything above a gig in most places. At least not yet.
Well, 'tried and true' would be something like an Intel i350 (1Gbit) or X550-T2 (10Gbit + 2.5/5Gbit). Their consumer lineup is another matter; the i225 is a bit of an outlier in that it took three revisions to get something that was more or less considered stable. The release of the i226 is, I think, meant to put a wrap on the issue.

I don't even understand why they had to go and invent the ridiculous 2.5gig and 5gig standards when a perfectly acceptable 10gig standard was already in existence. It's kind of annoying.
I shared the same opinion initially. However, 2.5Gbit and 5Gbit are far more 'accessible' than 10Gbit; they both run on Cat5e, have lower power requirements, and are being used for things like access points as well as internet connections. The fiber modem I have now has a 5Gbit port, and when I had the service level from the ISP set to 2Gbit, I was able to regularly hit 2.5Gbit up and down - and that's using an Intel 2.5Gbit NIC (or Realtek, even via USB3).

Realistically, 10Gbit support for the commoners is going to be a long time coming.

Also, one thing I've never understood is why there are so many ATX and E-ATX motherboards with 10gig NICs on board, but I can't find a single Mini-ITX board that has them.
I share that annoyance, especially given that you can get ITX boards with TB4 / USB4, which is harder to implement. If I could get an ITX board with both TB4 / USB4 and 10Gbit, well, I probably wouldn't need anything else!

You'd think a Mini-ITX board would have more spare lanes with which to support the bandwidth, what with not being able to fit a second PCIe slot and all. And it's also more useful on a Mini-ITX board, because you don't have the PCIe slots to add one yourself!
I'll say that I've looked into using an M.2 NVMe slot adapter to add a 10Gbit NIC; I haven't actually tried it yet as it would require some modding for any ITX build I have, but I know that it's possible.
 
Realistically, 10Gbit support for the commoners is going to be a long time coming.

I hear you, but I am still not quite sure why.

We went from 10Mbit -> 100Mbit -> Gigabit Ethernet being accessible to consumers in ~10 years. Each of those was a factor of 10 increase.

If we had kept up the same pace, the transition to 10gig being the norm should have been complete like 15 years ago. Large volume enterprise copper 10G NICs have been around since - what - 2006-2007 at this point? It shouldn't have taken THAT long for them to work their way into the consumer space. At least it didn't in the past.

The cause is probably the predominance of Wireless LAN among consumers. For better or worse, that's what consumers have demanded, so instead of newer, better and faster wired standards, we have received a series of underperforming wireless standards.

I share that annoyance, especially given that you can get ITX boards with TB4 / USB4, which is harder to implement. If I could get an ITX board with both TB4 / USB4 and 10Gbit, well, I probably wouldn't need anything else!

Yeah, ever since AMD declined to make a Zen3 HEDT Threadripper I have been planning what to do next. I could drop down to consumer ATX or E-ATX boards, but then I'd have to find a board with at least two PCIe slots capable of operating at 16x (GPU) and 8x at the same time. All of the ones I can find drop down to 8x/8x if you use the second slot. I wouldn't even care if the 16x GPU slot is Gen5 and the 8x slot is Gen4 or even Gen3. This should be doable with PCIe switching, taking the Gen5 bandwidth and splitting it off into a larger number of Gen4 or even Gen3 lanes for legacy devices, but no one seems to be doing this for some reason.

My second option is to just keep the Threadripper indefinitely as my workstation board (I don't really need that much power in my workstation, the expansion is more important, so if it ages, that is fine) and use the "two motherboards" feature of my Corsair 1000D to install a second Mini ITX board just for games.

I'd really want to do some local PC to PC networking in order to share storage and other things, which means, I'd very much want it to have at least 10gig Ethernet. (25gig would be even better! I'd love to do an SFP28 direct attach copper cable between them) Modern m.2 drives can definitely take advantage of that bandwidth. I just can't find any that have 10gig ethernet.


I'll say that I've looked into using an M.2 NVMe slot adapter to add a 10Gbit NIC; I haven't actually tried it yet as it would require some modding for any ITX build I have, but I know that it's possible.

I have looked in the past, and I can only find one such product, the Innodisk EGPL-T101, and I have only found it for sale in one place, by one seller on Amazon, and the pricing is obscene. $699 for a NIC? There has to be a better way.
 
I hear you, but I am still not quite sure why.

We went from 10Mbit -> 100Mbit -> Gigabit Ethernet being accessible to consumers in ~10 years. Each of those was a factor of 10 increase.

If we had kept up the same pace, the transition to 10gig being the norm should have been complete like 15 years ago. Large volume enterprise copper 10G NICs have been around since - what - 2006-2007 at this point? It shouldn't have taken THAT long for them to work their way into the consumer space. At least it didn't in the past.
Two things: 10Gbase-T requires significantly more power, and new cabling. And since internet connections really aren't widely available over 5Gbit, well, there's just not much demand; so cost and limited demand put a damper on adoption, especially with inexpensive solutions from Intel and Realtek that do 2.5Gbit.

The cause is probably the predominance of Wireless LAN among consumers. For better or worse, that's what consumers have demanded, so instead of newer, better and faster wired standards, we have received a series of underperforming wireless standards.
I'm using the ISP's fiber modem / router / access point (until I get brave enough to install the UDM Pro...), and I've gotten 700Mbps up and down, across the house, with an iPhone 10. The ITX system in my living room next to the bedroom where the access point is? 1Gbit up and down. That's on WiFi 6, 5GHz.

I get the reservations for WiFi for gaming use, and by my testing those are still valid. Bandwidth aside, connections just aren't stable for all types of gaming, and I still get ping spikes and packet drops despite the bandwidth being present. But for everything not gaming? No issues.

Yeah, ever since AMD declined to make a Zen3 HEDT Threadripper I have been planning what to do next. I could drop down to consumer ATX or E-ATX boards, but then I'd have to find a board with at least two PCIe slots capable of operating at 16x (GPU) and 8x at the same time. All of the ones I can find drop down to 8x/8x if you use the second slot. I wouldn't even care if the 16x GPU slot is Gen5 and the 8x slot is Gen4 or even Gen3. This should be doable with PCIe switching, taking the Gen5 bandwidth and splitting it off into a larger number of Gen4 or even Gen3 lanes for legacy devices, but no one seems to be doing this for some reason.
SLI is dead, CFX is... probably dead? That's the long and short of it. The extra expense for PCIe switching, especially for PCIe 5.0 (despite its lack of uses), is just unpalatable given the lack of market demand.

I don't think I've seen any modern consumer boards support an x16 and an x8 slot simultaneously. Lots of other crazy stuff (cue ASUS and Gigabyte high-end boards!), but not that.

My second option is to just keep the Threadripper indefinitely as my workstation board (I don't really need that much power in my workstation, the expansion is more important, so if it ages, that is fine) and use the "two motherboards" feature of my Corsair 1000D to install a second Mini ITX board just for games.
I'd tell you to just build a separate gaming system. You really, really want to have space to properly cool a modern CPU and GPU. Can still be relatively small, just not that small, IMO.

I'd really want to do some local PC to PC networking in order to share storage and other things, which means, I'd very much want it to have at least 10gig Ethernet. (25gig would be even better! I'd love to do an SFP28 direct attach copper cable between them) Modern m.2 drives can definitely take advantage of that bandwidth. I just can't find any that have 10gig ethernet.
What I've looked into has been those ribbon-based PCIe x4 risers that are M.2 on one end and PCIe on the other, basically converting an M.2 NVMe slot into a PCIe slot, and then hooking up a 10Gbit (etc.) card with that. Many ITX boards put one M.2 slot on the top and one M.2 slot on the back of the board, so that back slot I'd try to figure out how to route somewhere accessible.

Obviously the integrated solution is there for those that'd rather pay to have that done for them and likely for support, but I think the DIY route is possible, and certainly far cheaper.
 
Two things: 10Gbase-T requires significantly more power, and new cabling. And since internet connections really aren't widely available over 5Gbit, well, there's just not much demand; so cost and limited demand put a damper on adoption, especially with inexpensive solutions from Intel and Realtek that do 2.5Gbit.

Do people really only use their home networks to access the internet?

I probably have 100 times or more traffic that stays 100% local on my LAN than I have traffic that goes out to the WAN.

That seems like such a failure of imagination. There is SO much more to networking than just "internet".

I'm using the ISP's fiber modem / router / access point (until I get brave enough to install the UDM Pro...), and I've gotten 700Mbps up and down, across the house, with an iPhone 10. The ITX system in my living room next to the bedroom where the access point is? 1Gbit up and down. That's on WiFi 6, 5GHz.

I get the reservations for WiFi for gaming use, and by my testing those are still valid. Bandwidth aside, connections just aren't stable for all types of gaming, and I still get ping spikes and packet drops despite the bandwidth being present. But for everything not gaming? No issues.


I'll admit, I haven't played with the latest wifi standards. Everything in my house is still at 802.11ac. Because I don't do any heavy lifting with wifi, I have had no reason to upgrade. Phones, tablets and sometimes laptops access the wifi access points, but everything that is stationary (including all the big stuff) is either copper or fiber.

That said, back in the b/g/n days I had nothing but trouble. The box may have said 600Mbit/s, but in reality, even if you held the device a couple of inches from the antenna of the router, you got about 10Mbit. This was likely due to living in a very dense neighborhood with lots of 2.4GHz congestion (other routers, cordless phones, malfunctioning microwave ovens, you name it), but it was really bad.

Congestion, walls, EM interference, the fact that the standard is flawed and only half duplex, and - of course - the inherent security issues with broadcasting your network over the air just turn me off from wifi completely.

While it could probably be faster if you live in the boonies away from other people, ever since that experience I just don't trust anything wireless. I keep it as locked down as possible on a separate VLAN, and use it only for trivial light stuff that I don't care about and which has to be mobile. If something is stationary, come hell or high water it gets a wired connection. I don't care if I have to move heaven and earth to make it happen. No way a stationary device ever uses wifi in my house. It is against my religion :p

SLI is dead, CFX is... probably dead? That's the long and short of it. The extra expense for PCIe switching, especially for PCIe 5.0 (despite its lack of uses), is just unpalatable given the lack of market demand.

I don't think I've seen any modern consumer boards support an x16 and an x8 slot simultaneously. Lots of other crazy stuff (cue ASUS and Gigabyte high-end boards!), but not that.

That's the part that bugs me. I don't do SLI. I tried CFX once in 2011. It sucked, and I quickly moved back to single GPU. Then I tried SLI in 2015. It also sucked, and I moved back to single GPU.

SLI/CFX is not the reason for having ample expansion.

What if I want to drop in an extra NIC or two? Or a storage controller card? Or a capture card? Or additional USB ports? Or any number of things?

PCs used to be about flexibility. You install the hardware in the slots that you want or need for your application.

Who knows what delicious, potentially really awesome hardware expansion never happens simply because PCIe expansion is now viewed by many as "just something you use for your GPU". Something like PhysX could never happen today; it wouldn't have a capable enough port to install itself in.

USB keeps getting faster and faster, but it is still an unreliable underperforming mess and should never be used as an alternative to proper ISA/VLB/PCI/AGP/PCIe expansion ports.

The last few generations of consumer boards have really destroyed the core of what the PC is: a general-purpose, highly modular computing platform that is infinitely customizable.

Instead we find ourselves in a living hell of dangling external USB devices that can disconnect if you look at them wrong.

Because of this, the market for PCIe expansion devices is tanking outside of enterprise, and that is awful.


I'd tell you to just build a separate gaming system. You really, really want to have space to properly cool a modern CPU and GPU. Can still be relatively small, just not that small, IMO.

Cooling is not an issue for me. Anything would wind up receiving water blocks and being routed into my loop with overkill radiator capacity.

What I've looked into has been those ribbon-based PCIe x4 risers that are M.2 on one end and PCIe on the other, basically converting an M.2 NVMe slot into a PCIe slot, and then hooking up a 10Gbit (etc.) card with that. Many ITX boards put one M.2 slot on the top and one M.2 slot on the back of the board, so that back slot I'd try to figure out how to route somewhere accessible.

That is also an option. Unfortunately most decent 10gig and 25gig PCIe adapters I have seen are 8x.

And that's a shame. If you run the numbers, even a Gen3 4x slot should be enough for a 25gbit adapter, providing ~30.78Gbit worth of bandwidth.

Early on I thought they just made them 8x for the benefit of people on older systems, so the cards could fall back to an older PCIe generation and still get the required bandwidth. By that logic, in theory, you ought to be able to install them in a 4x slot and, as long as you provided enough bandwidth, they would be fine.

I have done some limited testing with this (granted, on really old NICs like the 2007-era Intel 82598EB chip). In theory a 4x Gen2 connection should have provided 15.625 Gbit/s, full duplex, more than enough for a single 10gig port, but in practice it didn't work at all. The card threw a hissy fit when it didn't have all 8 lanes.

Now, who knows, maybe newer designs work better. I did some preliminary research by asking the question on the ServeTheHome forums, and it seems like others have had similar problems with 10gig and 25gig cards in 4x slots. I didn't get a ton of responses though, so I can't say for certain that it won't work.

I'd experiment myself, but 25gig adapters still aren't quite cheap enough to just buy on a whim for a test.

A 25gig direct connection between the two systems would be perfect. I could use a ZFS pool constructed from the many M.2 slots on the Threadripper to provide additional storage to the dedicated gaming box via 25gig Ethernet. We are talking ~3GB/s, which ought to be plenty.

If you do test it yourself, I'd love to hear your results.

Edit: As I am writing this, I did another eBay check. There are suddenly a lot of Mellanox 25GbE NICs on eBay at about the $100 "buy it now" price point, which isn't too bad. They must just be old enough to start being decommissioned in large numbers now. I don't really have any experience with Mellanox though.

My first jump into 10gig networking in 2014 didn't go too well. I bought a few Brocade BR1020 controllers and had nothing but trouble with them. After a few years of dealing with poor and intermittent performance, I bought a bunch of Intel X520 SFP+ NICs, and in conjunction with proper Intel and Finisar modules (I understand Intel's modules are just rebranded Finisar modules, which is why I tried them), things "just worked". I'd be a little hesitant to experiment with non-Intel NICs again...

The Intel 25gig NICs still cost a little more though...

Oh, and I might have to upgrade the whole server, as the quad port ones I am finding (E810-XXVDA4) are 16x PCIe Gen4, and my server is Gen3. Then again, maybe they made them 16x Gen4 precisely so they can provide adequate bandwidth both over Gen3 (at 16x) and Gen4 (at 8x), but buying one to find out is a risk, especially when they are going for $600 used on eBay.

According to my calculations:

Configuration    Gbit/s
16x Gen4         252.16
8x Gen4          126.08
16x Gen3         123.125
8x Gen3          61.5625

I see no reason why a quad 25gig adapter should require 16x Gen4, unless it has monstrous overhead.
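
For anyone who wants to sanity-check those figures, here's a rough Python sketch of the same calculation. It only accounts for transfer rate times line encoding (8b/10b for Gen2, 128b/130b for Gen3/Gen4), so it lands a couple percent off from my table above, which also budgets for packet overhead.

```python
# Rough PCIe usable bandwidth per slot configuration: raw transfer rate
# times line-encoding efficiency. Real links lose a few more percent to
# packet (TLP/DLLP) overhead, so treat these as ceilings.

GT_PER_LANE = {2: 5.0, 3: 8.0, 4: 16.0}             # GT/s per lane, by gen
ENCODING = {2: 8 / 10, 3: 128 / 130, 4: 128 / 130}  # line-code efficiency

def pcie_gbit(gen: int, lanes: int) -> float:
    """Encoded line rate in Gbit/s for a given PCIe generation and lane count."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes

for gen, lanes in [(4, 16), (4, 8), (3, 16), (3, 8), (3, 4), (2, 4)]:
    print(f"{lanes:>2}x Gen{gen}: {pcie_gbit(gen, lanes):7.2f} Gbit/s")
```

By that math a quad 25gig card needs ~100Gbit of slot bandwidth, which an 8x Gen4 slot (or even a 16x Gen3 one) covers with room to spare.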


This is all of course unless I can live with the dual port XXV710-DA2's, which according to Intel's ARK page are 8x Gen3...

And I might be able to.

Another option is to check whether there are any 100Gbit QSFP28 adapters I could use a four-port breakout cable with... Maybe some of those are Gen3... A quick search turns up a few HPE Intel-based NICs with a single QSFP28 port, and they are 8x Gen3 (which suggests a blocking, bandwidth-starved design), but the asking price is an obscene $720 per NIC. Other sites have them listed as selling refurbished for $399 not that long ago, so I don't know what is going on there. Popular demand driving up prices?
 
Do people really only use their home networks to access the internet?

I probably have 100 times or more traffic that stays 100% local on my LAN than I have traffic that goes out to the WAN.

That seems like such a failure of imagination. There is SO much more to networking than just "internet".
Honestly... my home is not that exciting. Apart from some local computer backups and a few mp3s/movies that get streamed locally via Plex - the vast, vast majority of my household traffic is via the Internet.

The backups could, on occasion, make use of something faster, but they are so infrequent and low-impact that they don't need to be any faster, and they mostly run automated overnight. It's only when I do a new backup, or need to actually go in and do something with one, that I even really notice they are often large archives. The movie streaming could work just fine on 100Mb.

I can't really think of anything that I could run at home (that is of any use or import to me) that would really stress a 1Gb network, let alone a 10Gb one, other than the connection to the internet. Maybe it is just my lack of imagination though.
 
Honestly... my home is not that exciting. Apart from some local computer backups and a few mp3s/movies that get streamed locally via Plex - the vast, vast majority of my household traffic is via the Internet.

The backups could, on occasion, make use of something faster, but they are so infrequent and low-impact that they don't need to be any faster, and they mostly run automated overnight. It's only when I do a new backup, or need to actually go in and do something with one, that I even really notice they are often large archives. The movie streaming could work just fine on 100Mb.

I can't really think of anything that I could run at home (that is of any use or import to me) that would really stress a 1Gb network, let alone a 10Gb one, other than the connection to the internet. Maybe it is just my lack of imagination though.

While I have lots of small and slow things that gigabit local networking could handle just fine, the high speed stuff is mostly for NAS access.

I'll often dd a disk image to my NAS as a backup, like before a re-install, in case I forget something or screw something up. Dumping a 1TB or 2TB drive over the network to my NAS is a lot more tolerable at 1200MB/s than it is at 120MB/s. Even at that speed a 1TB drive takes about 14 minutes though. I'd love to step it up to 25gig. Also, managing the large files in the media library is more convenient when I have more bandwidth.
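
For a rough sense of what each jump in link speed buys on those image dumps, here's a quick sketch, assuming the storage on both ends can actually keep the link saturated (modern NVMe can):

```python
# Rough per-terabyte transfer time at different sustained link rates.
# Assumes the drives on both ends can saturate the link.

def minutes_per_tb(rate_mb_s: float) -> float:
    """Minutes to move 1 TB (10^12 bytes) at rate_mb_s megabytes per second."""
    return 1_000_000 / rate_mb_s / 60

for label, rate_mb_s in [("1gig", 120), ("10gig", 1200), ("25gig", 3000)]:
    print(f"{label:>5} (~{rate_mb_s} MB/s): {minutes_per_tb(rate_mb_s):6.1f} min per TB")
```

That works out to roughly 139 minutes per TB at gigabit, ~14 minutes at 10gig, and under 6 minutes at 25gig.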

The fiber cables I have run are all OM3 and should handle SFP28 modules just as well as they do the SFP+ modules I already have, so they ought to be a drop-in replacement. I keep eyeballing the quad port (for the server) and single/dual port (for the testbench and workstation) 25gbit NICs on eBay. One of these days I'll pounce.

The uplink to the switch (currently at 10gig) will unfortunately stay the same though, as a 25gbit switch is still a bit rich for my blood. I only get these things either decommissioned from enterprise users or new from MikroTik. MikroTik works some sort of magic to make them more affordable, but last I checked, they didn't have any SFP28 products yet.

Regular streaming is really low bandwidth. The last six feet is all gigabit there. Usually the chain looks something like this.

Server --10gig DAC--> Main 16 port 10gig fiber switch in rack --10gig--> 10gig uplink port on mostly copper switch (one per floor) --gigabit copper--> streaming box.

In my case my streaming boxes are little Odroid devices running CoreELEC these days. Much more trustworthy than a "smart" device that is spying on you.

Don't get me wrong, I understand full well that most people don't have racks and servers in their basements and that I am a bit of an outlier in that regard, but I still find it sad that the LAN has devolved into just an extension of the internet for most people.

This will probably just continue as IPv6 becomes more and more commonplace and people eschew other protocols. The IPv6 mantra of "everything everywhere, globally 1:1" really is something I don't care for. I like the "network of locally controlled independent networks" philosophy better than one big jumbled mess.
 