Hacker News
Keylogger in Hewlett-Packard Audio Driver (modzero.ch)
492 points by ge0rg on May 11, 2017 | 116 comments


> Actually, the purpose of the software is to recognize whether a special key has been pressed or released.

I'm doubtful of the utility of software like this. Every driver and application seems to want to keep a persistent background process running, and the natural inefficiency of software (this executable is ~2MB --- why it needs to be this big, I'm not certain; from a brief inspection, all it seems to be doing is controlling microphone mute/unmute) results in a huge waste of resources and new computers that appear no more responsive than older ones.

However, to put the severity of this problem in perspective, from the description this is not like a typical keylogger that sends keystrokes out to some remote server; it only logs locally.

If you regularly make incremental backups of your hard-drive - whether in the cloud or on an external hard-drive – a history of all keystrokes of the last few years could probably be found in your backups.

There's going to be plenty of other sensitive information in your backups, which if you don't want others to read you would use encryption anyway, in which case the point is rather moot.

> Any process that is running in the current user-session and therefore able to monitor debug messages, can capture keystrokes made by the user.

...or it could just monitor the keystrokes itself with SetWindowsHookEx() like this process.
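
(To make the "monitor debug messages" part concrete: any process in the same session can read OutputDebugString() traffic through the documented DBWIN_BUFFER shared section - this is how Sysinternals DebugView works. A minimal sketch in C, error handling trimmed and structure my own:)

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* OutputDebugString() in other processes looks for these named objects. */
        HANDLE bufReady  = CreateEventA(NULL, FALSE, FALSE, "DBWIN_BUFFER_READY");
        HANDLE dataReady = CreateEventA(NULL, FALSE, FALSE, "DBWIN_DATA_READY");
        HANDLE mapping   = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                              PAGE_READWRITE, 0, 4096, "DBWIN_BUFFER");
        if (!bufReady || !dataReady || !mapping) return 1;

        /* Buffer layout: a DWORD sender PID, then the NUL-terminated message. */
        const char *buf = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (!buf) return 1;

        for (;;) {
            SetEvent(bufReady);                 /* "ready for the next message" */
            WaitForSingleObject(dataReady, INFINITE);
            printf("[pid %lu] %s", *(const DWORD *)buf, buf + sizeof(DWORD));
        }
    }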

Thus, I think the correct reaction to this is more towards the "oops... that wasn't a good idea" than "everybody panic!"


> There's going to be plenty of other sensitive information in your backups, which if you don't want others to read you would use encryption anyway, in which case the point is rather moot.

This is a bit of a strawman. When you're backing things up, you know what you're backing up and chose to do so. Here you'd be backing up things that you didn't want to, or even worse, things you'd never want to be backed up anyway. If someone gets into my backups, maybe they can see some family photos or financial data...but they wouldn't otherwise be able to see all the porn searches I do in incognito mode. With this, they could potentially access that as well.


I think Windows actually has a built-in "Backup Computer" feature, which AFAIK is a complete image backup. A lot of the cloud backup products back up the entire computer too (like Time Machine does for OSX). The convenience factor of a system backup for a non-technical user (someone who struggles with explorer.exe) is pretty great.

That being said, keylogging is just plain horrible and inexcusable. Passwords, searches, private messages, etc. There's no way we should be cutting them slack on this.

The worst part is, it's crapware like this that steers us towards the walled app store model for PCs... and the loss of freedom that accompanies that.

Every time I hear of stuff like this I gain more respect for Stallman.


> Thus, I think the correct reaction to this is more towards the "oops... that wasn't a good idea" than "everybody panic!"

"Oop..."? - Is that what you'd write on the graves of dissidents whose efforts to communicate securely were subverted by a keylogger in an audio driver?

Too over the top? The HN-compatible version: actual reverse engineering can be punished by law in most of the first world (through the illegal status of tools that can be used to circumvent access restrictions). Of course, this is close to impossible to prove when all you publish is the security advisory, and thus nobody really cares. "Oops..." - someone accidentally just made a recording & backup of all the evidence against you :)

There is no "correct" reaction, only individually justified reactions and if you can afford to just say "oops..." you may consider yourself lucky.


"Oop..."? - Is that what you'd write on the graves of dissidents whose efforts to communicate securely were subverted by a keylogger in an audio driver?

A smart dissident wouldn't be using a Windows laptop to write things that could get them killed.

It's not a diss on Windows, but rather basic opsec: if you want to be secure, you need to know what every part of your system is doing. Or at least rely on the fact that open source maintainers have examined every part of the system.

One basic protection is to boot into Tails, where an audio driver like this won't have an effect.


Under authoritarianism, one's status as a dissident exists entirely at the whims of the regime. If you're recommending everyone use Tails after the pattern of someone like Stallman, I get your point. Otherwise, you're putting an absurd standard on people who might otherwise be incredibly courageous but lack technical understanding, to a degree that seems like victim-blaming.


It's an observation that if you're using a stock Windows installation to post inflammatory remarks about the regime you live under, you're gonna have a bad time. An audio driver logging your keystrokes is 0.1% of your concern at best.

Of course dissidents are courageous and deserve sympathy. That's not quite the point, though.


Yes, the victims should have known.

I strongly suspect (and hope) that you are just trolling, so in a very short form:

* "It's an observation" that you eat babies. This statement is just as valid as yours. I mean, "it's an observation" (tm). Read: Well, duh, yeah, whatever you say. Come back when you have a point.

* "posting inflammatory remarks" is not activism. Activists have no time for this BS, they do actual political work.

* Spend a few weeks in a developing country before you give advice to people who live there. Here in my house I can get you an apartment for $200/mo (2 bedrooms, hot water, gas, electricity and "internet" inclusive). But be aware: Best phone you can purchase in a 3h-radius runs Android 4; next place to buy a laptop that halfway works as expected with a vanilla linux distro: 6 to 10h driving. Modern hardware costs at least twice as much as in 1st world.

* Before talking, take over responsibility for OPSEC in a place like mine: Nobody within hours that can help someone with a problem with tails but me (well, you, since you just volunteered); next linux-expert? At least a day's travel. Still want to advise for a non win/osx-solution? I'm looking forward to hear why and how this should work.

Too much work? Well, then at least show me that at least half of your friends and family adhere to the standards you expect from unknown people who live in circumstance you obviously have no clue about. Until you do that: You're trolling.

PS: I've been teaching OPSEC to activist since 1996. I'm using Linux since 1997 and FreeBSD since 1999. I happen to know a thing or two about windows as well.


This is just an escalation attack, not one which is remotely exploitable.

You still need access to the machine in order to take advantage of this, and by the time you have access to the machine you can run your own keylogger if you'd like. The only increased risk here is the ability to look back into the past.

Which is bad on its own, but let's not exaggerate. This didn't get any dissidents killed. Nor is it likely to.


What about people who encrypt sensitive information locally, then back it up? That's perfectly secure in the common case, as long as you don't accidentally store sensitive information outside of your encrypted containers. But the moment you start backing up comprehensive key logs, anyone with access to the backups can decrypt your stuff.


> There's going to be plenty of other sensitive information in your backups

I have never once backed up the login details for my online banking, or the passwords to my cloud servers, or the passwords to government websites, etc. etc.


> this executable is ~2MB --- why it needs to be this big, I'm not certain

Because writing this in 200 lines against the Win32 API, resulting in a 15 kB executable, would be far too straightforward. Abstraction is required.
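
(For scale, a sketch of roughly what those 200 lines would look like - a low-level keyboard hook that reacts to one special key and ignores the rest. The choice of VK_VOLUME_MUTE and the overall structure are my assumptions, not what Conexant actually shipped:)

    #include <windows.h>
    #include <stdio.h>

    /* Hypothetical "special key" to watch; a real helper would use the
       vendor-specific mic-mute key instead. */
    #define WATCHED_KEY VK_VOLUME_MUTE

    static LRESULT CALLBACK HookProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION) {
            const KBDLLHOOKSTRUCT *k = (const KBDLLHOOKSTRUCT *)lParam;
            /* A well-behaved helper ignores every key but its own;
               logging k->vkCode unconditionally is what makes a keylogger. */
            if (k->vkCode == WATCHED_KEY)
                printf("special key %s\n",
                       (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN)
                           ? "pressed" : "released");
        }
        return CallNextHookEx(NULL, code, wParam, lParam);
    }

    int main(void)
    {
        HHOOK hook = SetWindowsHookEx(WH_KEYBOARD_LL, HookProc,
                                      GetModuleHandle(NULL), 0);
        if (!hook) return 1;

        /* Low-level hooks need a message loop on the installing thread. */
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)
            DispatchMessage(&msg);

        UnhookWindowsHookEx(hook);
        return 0;
    }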


Just wait until people start making services and software-bundled-with-drivers in Electron.


Well, this used NW.js: http://www.computerworld.com/article/3018972/security/ransom...

I vaguely recall something that used Electron, but I can't find it. It was 30MB+.


Let's replace that with a tab for users to keep open!

(Presses Shift-Esc, sees the number 91.372K next to the Hacker News tab. Wonders what that means. Say, the number next to my Gmail is like 10 times that. I wonder what that means.)


Google search results frequently use 200MB+ on my machine.

It's like the Internet has collectively decided nobody has 2GB of RAM anymore. (Yep.)


Then why use gmail?


One thing I really like about Linux: random platform-specific hardware features like the mic button or whatever this is are handled by an open source "platform" driver in the kernel. These drivers expose a more or less uniform interface to user code.

So, when I install Linux on a laptop, most or all of the weird laptop-specific buttons just work without OEM crapware or runtime performance hits.

The downside, of course, is that you can't just download fresh crapware to make your brand new laptop fully functional. I'll take that tradeoff.
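
(A sketch of what that uniform interface looks like in practice: a platform driver such as hp-wmi surfaces vendor hotkeys as ordinary evdev events, so user code reads them like any other key. The device path below is an assumption - find yours in /proc/bus/input/devices:)

    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical device node; the platform driver's hotkeys
           show up as a regular input device. */
        int fd = open("/dev/input/event5", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev)
            if (ev.type == EV_KEY && ev.code == KEY_MICMUTE)
                printf("mic-mute key %s\n", ev.value ? "pressed" : "released");

        close(fd);
        return 0;
    }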


Well said. I have the same sentiment. Gimme Linux (or FreeBSD) and I stay clear of your crapware, re-install my machine less often, and install a new OS+apps much faster.


As a rule of thumb, you have:

  * Decent software companies terrible at making hardware
  * Decent hardware companies terrible at making software
I have yet to see one that does both correctly. Hardware manufacturers are known to produce the worst code quality you can think of: badly designed, poorly written, undocumented, insecure, bloated.

I have the feeling that the whole IoT problem is also related.


> Decent software companies terrible at making hardware

Not sure if one wants to count them as a decent software company, but Microsoft is quite good at building keyboards, IMHO.


Their internal divisions might be all over the place regarding company policies (see: Windows 10 as spyware :) ), and their insistence on keeping strict backwards compatibility with sub-par products (DOS, basically) might be debatable.

But their engineering these days is top notch. They might not be the best, but they're probably in the top 10% of software companies regarding engineering practices, IMO. If not higher.


> and their insistence on keeping strict backwards compatibility with sub-par products (DOS, basically) might be debatable.

All 64 bit versions of Windows can neither run DOS applications nor Win16 applications (I described the technical reason for this at https://news.ycombinator.com/item?id=14246521).


While that's strictly true, there's a wider range of backwards compatibility they're keeping.

See the whole Rust/Cargo problem on Windows. Windows files can't be named con, aux, etc., because the Windows file systems are backwards compatible, generationally, all the way back to DOS, which initially didn't have subdirectories and reserved those keywords for special files. Then, as DOS added subdirectories, it kept that restriction global; then Windows adopted it... and here we are today.

Windows is full of these things, a lot of them coming from DOS and more coming from Win16 or even early Win32.

I know they broke backwards compatibility in the strict sense with Win64, but apart from this not many situations where they did it come to mind. And even for that, they only did it because the overall market for DOS/Win16 was tiny at the point when they did it.
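
(The reservation is still observable from any program today; a tiny illustration in C, assuming it's compiled and run on Windows:)

    #include <stdio.h>

    int main(void)
    {
        /* "con" is a reserved DOS device name in every directory, so this
           opens the console device instead of creating a file named "con". */
        FILE *f = fopen("con", "w");
        if (f) {
            fputs("this goes to the console, not to a file\n", f);
            fclose(f);
        }
        return 0;
    }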


POSIX and its ecosystem are also full of this backwards-compatibility baggage (I, for example, claim that X11 has a lot more outdated backward-compatibility legacy than WinAPI - no surprise, since it is much older). To give a few basic texts about this:

The Unix-Haters Handbook: http://web.mit.edu/~simsong/www/ugh.pdf

A Tale of Two Standards: A text by a person (Jeremy Allison) who understands both WinAPI and POSIX: https://www.samba.org/samba/news/articles/low_point/tale_two...

The reason why people bring this up all the time with respect to Windows, but much more rarely with respect to POSIX and its ecosystem, is in my opinion that many more programmers have inhaled the latter.


Concurrently occurring Creation -> Obsolescence produces overlap:

    DOS      Win3.1       Win95                    Win10
  
    Creation               Obsolescence
    |----------App-----------|
        |---App---|
                      |------App------|
    ~1980 ========================================== ~2020
       |------App------|
               |--------App--------|
                              |---------App--------->
(Okay, that looked a little better in my text editor.)


Vista (or was it XP?) introduced a new security model that made older apps which depended on a common install location being writable very interesting... They still kind of worked, but the data's actual location was then virtualized per-user.

Other than that, most Windows software runs as it has for a fairly long time now.


CON is even older - it came to DOS from CP/M!


I'm pretty sure that excuse is random bullshit.

First, are handles limited to 16 bits in 32-bit Windows? I'm pretty sure they are not, especially since the 2 LSBs are 0 for NT handles...

Second, you could cherry-pick any other resource pointer and make up another excuse with the same power: e.g. "32-bit programs cannot work under 64-bit Windows because pointers are 64 bits wide with 'all' (at least > 32) of their bits used, so they won't fit in the 32-bit pointers of 32-bit programs."

That's not how backward compatibility involving virtualization is engineered - it's pretty much the other way around! If MS had wanted to make Win16 run on 64-bit Windows, they could have engineered it (without much difficulty). They chose not to (and I don't blame them too much for that), and as a result some (or a lot) of the details that would have been needed to ease that implementation may not exist. Maybe SOME handles were limited to 16 bits on 32-bit Windows, all the time or when the program was a 16-bit one, and the provision to limit them to 16 bits is gone from 64-bit Windows given the general lack of Win16 support on that platform. That perhaps makes it harder for a 3rd party to do now... (and even for MS, given how much Windows has evolved since the introduction of 64-bit).


If you really have 16-bit Windows/DOS software, these days DOSEMU or Wine under a VM may be better options (not sure if ReactOS would work well enough). DOSBox may work as well, depending on specific needs.


Regarding compatibility... didn't they try to break away with Windows RT? Though it was a commercial flop, only available on ARM and locked-down devices... I had pretty high hopes in that space, and higher hopes that they'd take Server in that direction...

I'm not sure MS can really dead-end a legacy codebase so easily. They seem to be trying, but hitting resistance at every turn.


I agree. My main keyboard is a Microsoft Internet Keyboard that I have been using for many hours daily since 1999, surviving five different computers (all running Linux). It's the most comfortable keyboard I've ever used, and it shows no signs of wear other than some stains, partial fading of a few of the key symbols, and dents beginning to appear on the Ctrl and C keys where my nails hit them.


There are no good standard rubber dome keyboards. Try using a keyboard with real switches, like buckling springs, or with good domes, like Topre switches, and you'll know what I mean.


I disagree. Every Microsoft Natural keyboard I've used that's more than a few years old has gotten incredibly mushy. I make an exception for the original line of Natural Keyboards in the mid-1990s, which had a firmer membrane, but the current range, including the "Ergonomic Keyboard 4000", just feels like I'm typing on kitchen sponges.


Sad to hear that. I like the 4000; in fact, I am typing on one right now.

I vaguely remember Fujitsu had an ergonomic keyboard (you could even adjust the angle between the two halves), but I have never laid hands on it, so I cannot say anything about its quality. All the Fujitsu notebooks I have touched had absolutely crappy keyboards, but that might not mean much.


I've never had a 4000 that survived more than a couple of years. Inevitably one of the keys gives out, and then it's off to Office Depot to buy another one.

The main thing it's got going for it is the overall layout, which is fantastic. If someone made a keyboard using mechanical switches with the same layout I'd snap it up in a hot second. But for reasons unknown, while keyboards with mechanical switches have proliferated in recent years, none of them incorporate 4000-style ergonomic features like split keys or a curved key bed.


The driver for my Microsoft Designer Mouse (one of their latest models) stopped working when I updated Windows the other day; no word on when a fix will arrive.


Old 1990s hardware from them was awesome. IntelliMouse, joysticks, keyboards, etc. were very good devices. Even the Comfort Mouse 6000 was still great (though they die too fast and have a hardware bug: middle click is delayed).

But nowadays it's all cheap plastic and cheap mechanics, made to last 2 years.

There are alternatives for good keyboards, but I have yet to find a good mouse (a business mouse, not a rainbow-colored funky gamer mouse).


Microsoft hardware in general has always been very good IME. Their mice were the best around for decades, and I'm loving my Surface Book.


Keep in mind Surface Books don't have global support - I bought mine at a Microsoft Store in the US and Microsoft UK just won't look at it.


The Comfort Curve 2000/3000 are some of my favorites to work on.

Comfortable, without the annoyances of going full mechanical.

Also, their optical mouse is quite good.

But the real crown jewel was the Xbox steering wheel (the real one, not the U stick): solid, precise, feature-packed, and sold at an incredible price.


Umm...

https://en.wikipedia.org/wiki/HP_Superdome

https://en.wikipedia.org/wiki/HP-UX

And HP is usually one of the best here rather than the worst, though definitely more so in the pre-Carly days.

Though I get the general sentiment, especially when applied to consumer products.


Apple was not bad at both some time ago. Not anymore though.


The good period was Apple hardware and NeXT software.


iOS quality dropped just after the days of "everything must be skeuomorphic!" - that UX was (luckily) thrown out quickly, but the replacement has hardly been stable and was quite poorly thought out from its inception, and some things still give me constant annoyance even today - like the time app not opening on the last-used tab.


It depends on whether you use the Control Center shortcut or not.


Apple?


iTunes


Apple?


Apple Music and iTunes? Both horrible.


Apple, I believe, is the wildly successful exception.


NVidia seems to produce excellent software (drivers, tools and SDK) for their excellent hardware.

SolarFlare network cards are also pretty good and so are their drivers and user space network stack.

In both cases the quality of the software side is a significant drive for their hardware sales.

In both cases the hardware is very expensive though.


> NVidia seems to produce excellent software (drivers, tools and SDK) for their excellent hardware.

Here is a different perspective: https://googleprojectzero.blogspot.com/2017/02/attacking-win...

And "excellent hardware" is probably rather subjective as well.


Excellent software does not mean bug-free, of course. Apparently the size of the code base of the driver itself is comparable to that of a typical OS kernel.

Their hardware is pretty much best in class in both absolute performance and performance per watt, so I would say that it is a pretty objective evaluation.


> Their hardware is pretty much best in class in both absolute performance and performance per watt, so I would say that it is a pretty objective evaluation.

There are many more criteria:

- Openness: Open specification, existence of open source drivers, necessity for a firmware blob vs. open firmware, support of open standards

- Time the hardware is supported by producers with new drivers for new platforms (NVidia loves to cease support for older GPU chips).

- cost and cost-benefit ratio (the whole Intel vs. AMD vs. ARM flamewar ;-) )

- Existence of intentionally blocked features: Some hardware vendors intentionally block features, sometimes even with the possibility to unlock them afterwards (e.g. by firmware or fuses) if you pay them additional money

- Willingness to go into legal limbo in the interest of users: for example, when copy protection schemes for CDs were common in the early 2000s, producers of CD/DVD writers tried to outdo each other in their devices' ability to still read such copy-protected CDs. A modern example might be how other copy protection schemes for, say, audio or video are implemented: in software, written so badly that it will (hopefully) soon be cracked, or locked down deep in a security chip that is part of the hardware.


> (NVidia loves to cease support for older GPU chips).

As someone who deals with old GPU support daily, I challenge this! The most recent drivers still support Fermi chips (the GeForce 400 series) that were released over 7 years ago.

I'm not sure how long you expect a chip to be supported?


I have a pile of Nvidia 5x0 cards that are now unusable under Linux.

The rest of the hardware I bought that year still works fine (even stuff that's more obsolete than the Nvidia cards).

AMD has usable open source drivers (= works with modern Steam games, and suspend/resume), so I bet on them for the new video card. I guess we'll see how it works out in 2022 or so.


Are you talking about the GeForce 500 series[0]? They are still supported. Assuming the HW is still good, all drivers should still work. You can try the NVIDIA driver included with the distribution (Ubuntu, Mint, etc.), or download one from nvidia.com.

If it doesn't work, you can file a bug (see/use nvidia-bug-report.sh).

[0]: https://en.wikipedia.org/wiki/GeForce_500_series


They "work", just not with modern software. I found threads where the nvidia devs and the open source devs discussed the issue. Apparently, the nvidia drivers violate the opengl spec and drop textures at unpredictable times (such as during suspend and mode switches).

The "fix" is to constantly spam the same texture at the card, or use some some non-standard opengl extension to see when the driver decided to drop the textures, then recopy them from dram to the card (why keep one copy in video ram when you can keep a second in dram for twice the cost?)

From what I can tell, people are sick of implementing this workaround, so newer software doesn't bother. This has been the status quo for over a year.

In practice, this means severe screen corruption when switching users or suspending/resuming.


I've never seen that particular bug before, but the whole thing sounds totally plausible. Since it only manifests on Fermi, it's likely an arch issue. I'm guessing the workaround in the driver was deemed too messy/expensive compared to the one in the application.

I'll take a cursory glance at the issue, but if you've seen NV devs discuss it externally, it's probably well known internally - likely closed as WNF (will not fix).

I totally see where all sides are coming from, though:

* NV: Why spend resources fixing old architectures when it's easy to work around at the application level?

* Devs: Why spend resources supporting some obscure old HW that doesn't work right.

* Users: Why spend money replacing perfectly good HW, because NV/devs are lazy?

The more time passes, the less likely the first two are to do anything about it; and in time the number of users with those cards drops below epsilon. The easiest option is to just get HW that works with the SW you need - like you did.


I hope you are right, but there is not enough transparency to tell. From the discussion it sounded like modern hardware will behave the same, and maybe they'll try to amend the standard.

I didn't feel like betting $100s to find out whether it hits on new hardware, but you can readily reproduce it on ElementaryOS with the 500s. You have to manually install the binary drivers with the Ubuntu/Debian proprietary driver tool, since ElementaryOS recommends the open source ones.

This isn't the only distro that hit it, just the one I landed on in the end.

Anyway, for me the easiest option was to switch to open source drivers (where this kind of thing has a better track record of being fixed), and Nvidia is a non-starter there.


> As someone who deals with old GPU support daily, I challenge this! The most recent drivers still support Fermi chips (GT400 series) that were released over 7 years ago.

To provide a source:

> http://www.tomshardware.com/news/nvidia-eol-graphics-card,26...

My opinion on how long a device should be supported: as long as there is no open specification available, one is dependent on the vendor to deliver updates. And graphics drivers are very prone to security bugs. So as long as a device still has a non-trivial user base, the vendor has to provide security fixes for it. I would even love for us to raise our standards and demand that a producer support their hardware up to the moment they release open specifications for it.


So you're talking about Tesla and earlier chips. Those are indeed only supported for the security fixes. The 34x.xx driver supports those chips, and they are indeed getting security updates.

I have personally fixed a couple of these issues[0], including for those "EOL'd" cards. The most recent posted drivers for these chips I'm seeing are 342.01 from 2016-12-14 for Windows and 340.102 from 2017-02-14 for Linux. That seems like it would check the "provide security fixes" box, no?

>has to support their hardware up to the moment they release open specifications for it.

I'd tend to agree, but as you can imagine this is quite a complex issue for us, so no comments here :)

[0]: http://www.nvidia.com/object/product-security.html


Nvidia are the people that require you to log in to get driver updates. Their "geforce experience" is mostly horrible, apart from Ansel and maybe the streaming thing.


I'm not a Windows user, so I wouldn't know (the NVIDIA drivers for my OS are an apt install away), but AFAIR "GeForce Experience" is not required to get the latest version of the drivers, only for non-essential tools.


That may explain your bias then.

Nvidia provides decent official Linux drivers, whereas other GPU manufacturers usually provide none.

On windows, all cards have decent drivers. Nvidia is not particularly remarkable.


With regards to your thoughts about the hardware being expensive, I found the following [1] article interesting. I'm mostly referring to the vendor lock-in part with CUDA.

[1]: https://streamcomputing.eu/blog/2014-08-05/7-things-nvidia-d...


Nvidia's software is bloated, requires a login for features, includes telemetry without notifying the user, and has security issues in its drivers. I wouldn't really use them as a prime example.


> Actually, the purpose of the software is to recognize whether a special key has been pressed or released. Instead, however, the developer has introduced a number of diagnostic and debugging features to ensure that all keystrokes are either broadcasted through a debugging interface or written to a log file in a public directory on the hard-drive.

Looks like it's not intentional, although a really poor code-quality process, I would say.


> Looks like it's not intentional, although a really poor code-quality process, I would say.

To quote from

> https://en.wikipedia.org/w/index.php?title=Underhanded_C_Con...

(emphasis mine): "The Underhanded C Contest is a programming contest to turn out code that is malicious, but passes a rigorous inspection, and looks like an honest mistake."

Do you really believe that Malory does not use practices that make the security hole look like a mistake of a not-so-experienced programmer or an internal debugging tool that was accidentally left in?


That's why I wrote "looks like" in the first place.

A little paranoia here would actually be helpful. Anyway, my thought process is:

If it were really intentional, they would not dump the keystrokes to a plain text file (maybe some `.dat` shit instead) and would not truncate it on every login.

On the other hand, it seems quite stupid for anyone to not foresee what happens when this code is executed at scale.

If I were a big corp, I would actually demand (via court) that HP/Conexant open source the driver itself, since any investigation will need access to it anyway.


Sorry, what do you mean by "Malory" here? Trying to Google it, I only get references to Malory Archer from Archer.

EDIT: Think I found it. The new M's name is Gareth Mallory[0]

[0] http://jamesbond.wikia.com/wiki/M_(Ralph_Fiennes)


> Sorry, what do you mean by "Malory" here? trying to google I only get references to Malory Archer from Archer.

> https://en.wikipedia.org/wiki/Alice_and_Bob

"Mallory or (less commonly) Mallet: A malicious attacker."

Sorry I forgot one "l" in "Mal[l]ory".


Thanks! I was only aware of Alice, Bob and Eve, good to know there's a whole cast of characters.


> Looks like it's not intentional, although a really poor code-quality process, I would say.

OTOH, pretending incompetence has always been a pretty successful defence. This is a class of error that I won't easily attribute to incompetence.


I'm strangely not surprised by HP and their actions (in this case, the lack thereof). It reminds me of the Bose issue a year or so back with their products.

And the impact HP is going to experience is nothing. Most people to this day really don't care about or understand why this is a problem. They just want a computer for school, general internet surfing, or watching cat videos. (Cat and dog videos are quite interesting.)


HP got more flak for their "racist webcam" than this issue ever will, I suspect.

(2009: http://gizmodo.com/5431190/hp-face-tracking-webcams-dont-rec... )


I remember in the late '90s and early 2000s when HP was embracing Linux and open source... and then they merged with Compaq, and I've seen nothing but mistake after mistake from them since.

I'm really tired of seeing companies positioned to make good things and better the world get focused on quarterly profits and short-term thinking, because it always bites them in the ass eventually.

Mismanagement from the C level up abounds.


I archived the HP page just in case: https://archive.fo/FjWUv


> ...or it could just monitor the keystrokes itself with SetWindowsHookEx() like this process.

...which any AV will immediately flag. This allows malware to keylog in a much less detectable way, by piggybacking off trusted HP software.


This is one of the main reasons for libre/free/open/choose_your_term software.

Even when malice is not the concern, genuine error, incompetence, forgetfulness, or plain indifference must still be checked for.


As much as I love Free Software, there have been plenty of examples of bugs (security-relevant and otherwise) in FLOSS code that were just as problematic.

Source code being open for inspection only helps if people actually take the time to look at the code. OpenBSD deserves an honorable mention.


Proprietary software developers have little incentive, aside from the possibility of discovery, not to take advantage of users. It's sometimes difficult to determine whether something is malicious or not, and there's a greater chance of plausible deniability---we don't know the story behind that code.

If an anti-feature is discovered in free software, it can be promptly removed and replaced, regardless of whether the developers of the software consider it to be an anti-feature. What if HP wants this to remain for debugging/diagnostic purposes? There's little you can do about that. (I'm not saying they do.)

The fact that free software can have bugs only undermines the open source argument - the one that says "open source is better because of technical advantage X".[0]

There are frequent arguments that the freedom of drivers isn't important---mainly because it's inconvenient to use a system without non-free drivers, and working around that requires purchasing replacement hardware. This is an example that may help counter that point.

[0]: https://www.gnu.org/philosophy/open-source-misses-the-point....


I do agree - wholeheartedly - with you that Free Software gives users control over what their devices do. This is important. Control and trust that your devices do what they are supposed to do - no more, no less - are pretty much impossible to establish without free software.

My point was that if nobody bothers to look at the code, the bug will go undetected. Think of how long the Heartbleed bug had been in OpenSSL before it was discovered.


> My point was that if nobody bothers to look at the code, the bug will go undetected. Think of how long the Heartbleed bug had been in OpenSSL before it was discovered.

Yes, I agree. "Linus's Law", while it has some truth, is flawed (and open source) reasoning if considered absolute.


Furthermore, security bugs are a special case, because of the motivation for the bad guys to find them first.

One might argue that the rest of us have just as much motivation, but you know how that works in practice.


Okay... I'm not sure what your point is.

Yes, there can be issues with open source software, but unlike with closed software, they can actually be fixed.


Is this an old article? Conexant was acquired by Philips a while back.


Wow. That's going to hit HP


Unlikely. Normal consumers simply don't care, unfortunately.


What is a caring consumer to do? At this point, pretty much all laptop brands are on my blacklist for one such reason or another.


I have not been following all of the incidents that happened in the past. Do you mind sharing your list?


Get a Novena or something certified by the FSF? You're limited to slow hardware though.


Hope really hard that 3d-printed electronics become a thing.


Yeah, that's even worse than last year's Lenovo fiasco.


I personally wouldn't compare these two incidents, but I just wanted to remind you that the Lenovo incident was malicious by design. This one can, and most likely will, be attributed to carelessness.


The article suggests that as well, the way I read it:

"Actually, the purpose of the software is to recognize whether a special key has been pressed or released. Instead, however, the developer has introduced a number of diagnostic and debugging features to ensure that all keystrokes are either broadcasted through a debugging interface or written to a log file in a public directory on the hard-drive.

This type of debugging turns the audio driver effectively into a keylogging spyware."

Carelessness sounds like a fairly reasonable explanation, simply applying Hanlon's razor. :)


> Carelessness sounds like a fairly reasonable explanation, simply applying Hanlon's razor. :)

On the other hand: if you do believe that there exists software on most computers where a security hole has been deliberately left in (and since Snowden you should), applying Occam's razor will tell you that it probably looks like "innocent incompetence", since considering typical software quality this gives rather plausible deniability.


To be fair, logging keystrokes to a debug log sounds like something I might have done if I had to write and/or debug such a piece of code.

Then again, I probably would have wrapped that code in an #ifdef so it is only present in debug builds.

Come to think of it, I have done something like this, except I only logged keystrokes the application received directly, and I did wrap it in an #ifdef, although my motive was more along the lines of preventing the debug log from filling up the customers' hard drives. ;-)
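
(A sketch of that guard - keystroke tracing compiled out of release builds entirely; all names here are illustrative, not from any real driver:)

    #include <stdio.h>

    #ifdef DEBUG_KEYTRACE
    #  define TRACE_KEY(vk) fprintf(stderr, "key 0x%02x\n", (unsigned)(vk))
    #else
    #  define TRACE_KEY(vk) ((void)0)   /* no-op: no log file, no leak */
    #endif

    int main(void)
    {
        TRACE_KEY(0x41);   /* traces only when built with -DDEBUG_KEYTRACE */
        return 0;
    }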


Yes, and the results will likely be the same - i.e. nothing whatsoever.


"Neither HP Inc. nor Conexant Systems Inc. have responded to any contact requests. Only HP Enterprise (HPE) refused any responsibility, and sought contacts at HP Inc. through internal channels."

A keylogger and this is their response?

I hope they get the shit sued out of them.


MicBleed


googling "conexant keylogger" shows this is not a new problem.


Hmm, Conexant. I seem to recall battling their products back in the modem days...


This is a fucked up world we live in!


To fix the super-wide article:

    document.querySelector('.blogbody').setAttribute("style", "max-width:650px; margin: 0px auto;");


Thanks, you will go far in life.


Please, use a max-width on text columns. The article is unreadable on a large screen.


I tend to hate max-width on sites because I'm left with a giant white page that's only using a few % of the space for text.


But design is not only about looking good. It may look nice but have terrible UX.


Only if you have your browser window maximised or very large, which kind of defeats the purpose of a windowed multitasking OS.


You know, you can have more than one screen. And when working with visual/design/UI tools, you pretty much want to use as much of the screen as possible. Most of the time I have a browser fullscreen on one of my screens.


> when working with visual/design/UI tools, you pretty much want to use as much of the screen as possible.

Really? You maximise your content window? Then where do you leave all your tool windows? On a second monitor? Do you only design widescreen content? Don't you think those white bars on the left/right of your content are a total waste of screen real estate?


This is a losing battle. For people who have been reflexively maximising everything since they were using Win95 on little goldfish bowl CRTs, they simply won't stop doing it, no matter how ridiculous it is on big/wide screens.


I've been using full screen for some windows since my xterm days using 21" Sun Sony Trinitron monitors and I don't plan to stop now.


As long as it's not reflexive for virtually every application, you're golden.



