Advice when starting a tech job

Having recently started a new job (a not-uncommon occurrence among tech workers), I adopted a new strategy for signing up for the many services needed to do productive work as a designer or developer. I’d advise the same strategy for anyone who uses a work computer, such as a laptop you take home regularly, for most or all of their work (though not necessarily if you regularly work from your own personal computer).

Instead of trying to remember what my previous accounts were, and telling them all to the company so they could invite me to the company groups, I created all-new accounts, all using my work email address, even if I already had an account with that service. (Technically I missed one where I’m using an existing personal account, but in that case I wish I had created a new one.) Where a service asked for a non-email username, I used the same pattern the company already used internally, where possible. And for mailing addresses, I always used the office address.

Simple. It makes it clear who “owns” those accounts – the company I work for. When they add me, they always just add me by my work email; they never have to ask what my account is. If I ever leave, they have all the information about those accounts in my inbox (which they probably have access to). They could reset the passwords to log into one of the accounts and review any activity there, or delete one when it’s no longer needed. (For the odd account using two-factor authentication, I’d probably reset the phone number to my boss’s phone when I left.)

This helps make a clear separation between work and personal, and between current job and past jobs. When I leave this company, I won’t get any notifications about things which are no longer relevant to me (or any of my business), but the company will if they feel like checking the email box I was using. It’s also a reminder that when using my work-owned laptop, it’s for work, and I only ever log into work-related accounts and services on it.

Of course this may not apply so much if you’re a freelancer or contractor, or do all your work on your own computer.

The one exception, the area where you must still always use your personal contact information, is HR and payroll. Anything to do with your paycheck, health insurance, retirement accounts, or other benefits should always go to your personal email and be sent to your home address. If you ever leave the company, you want to be notified about your last paycheck, your insurance status and COBRA continuation, and, hopefully for many years to come, your retirement account, pension, stock options, etc.

But otherwise, keeping work and personal separated is the way to go, in my opinion. Think of all of those accounts, along with the laptop, as belonging to the company, and you’re just given access to use them while you work there.

Steps browser vendors could take to increase privacy

  1. Check password strength by checking the contents of “password” fields before being submitted:
    1. Length. Warn users trying to submit a short password that it’s short enough to be brute-forced.
    2. Entropy. Possibly encourage users to use a combination of upper and lowercase letters, numbers, and symbols.
    3. Dictionary. This is something that browsers are in a unique position to check for by including a “common passwords” dictionary with the browser. Check against a dictionary of, say, the top 10,000 – 100,000 most commonly used passwords (see https://xato.net/passwords/more-top-worst-passwords/), and warn the user that crackers will be checking these first, since the top 10,000 passwords are used by 99.8% of users’ accounts. (A rough sketch of these checks appears after this list.)
  2. Include a client-side password hashing mechanism, which could hash a combination of the password, username, and sitename with something like bcrypt. This would require some controls to limit the length and allowable characters to make the resulting password hashes compatible with various sites. If this were an industry-standard hashing function used by various browsers, they could all use the same rules, making password hashes portable across browsers and platforms.
  3. Anonymize the user agent in private browsing mode by not reporting fonts, plugins, or perhaps even the operating system in use, and only a fairly generic version for the browser.
  4. Enable “opportunistic” encryption in private browsing mode. This would be different from HTTPS Everywhere (though you might want to enable that, too). In this case, if the page was requested over regular http, test for the presence of https, and if port 443 is open and there’s a certificate installed, use it to communicate with the server (without asking or informing the user). If there are certificate problems, don’t report to the user that you’re using an https connection, but do use the “untrusted” encrypted connection since it’s likely more secure (certainly from passive listening) than regular http. Similarly, if any “mixed” content cannot be loaded securely (even after testing for an https connection, for instance from a different server which does not have https enabled), load it anyway. The goal here isn’t to ensure the connection is 100% encrypted, trusted, or protected from MITM (man-in-the-middle) attack, it’s just to opportunistically make use of as much encryption as possible where it is available. Just make sure not to mislead the user about how secure the connection is.
  5. Encourage the use of VPN services (including TOR) while in private browsing mode. Preferably have a list of “vetted” VPNs and an easy way to get set up using them.
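
To make item 1 concrete, here is a minimal sketch (in TypeScript) of the kind of length, character-class, and common-password checks described above. The thresholds, warning names, and the tiny sample dictionary are illustrative assumptions; a real browser would ship a list of tens of thousands of common passwords.

```typescript
// Minimal sketch of the length, entropy, and dictionary checks from item 1.
// Thresholds and the tiny sample list are illustrative only.

const MIN_LENGTH = 8;                      // assumed minimum before a "too short" warning
const commonPasswords = new Set<string>([  // stand-in for a 10,000+ entry dictionary
  "password", "123456", "12345678", "qwerty", "abc123",
]);

type PasswordWarning = "too-short" | "low-entropy" | "too-common";

function checkPassword(password: string): PasswordWarning[] {
  const warnings: PasswordWarning[] = [];

  // 1a. Length: short passwords can be brute-forced.
  if (password.length < MIN_LENGTH) warnings.push("too-short");

  // 1b. Entropy (rough proxy): count how many character classes are present.
  const classes = [/[a-z]/, /[A-Z]/, /[0-9]/, /[^a-zA-Z0-9]/]
    .filter((re) => re.test(password)).length;
  if (classes < 2) warnings.push("low-entropy");

  // 1c. Dictionary: crackers try the most common passwords first.
  if (commonPasswords.has(password.toLowerCase())) warnings.push("too-common");

  return warnings;
}

// A browser (or a site) could run this on password fields before the form is
// submitted and show a warning whenever the returned array is non-empty.
```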

Steps that need to be taken to rein in the NSA, et al

  1. Remove the legal authority that allows bulk collection, excessive secrecy, and other abuses. The FREEDOM Act and other pieces of legislation proposed so far are tentative, toe-in-the-water first steps in this direction, nibbling around the edges of the problem. They need to go much further.
  2. Enact harsh, explicit penalties for noncompliance. These are serious issues with major implications domestically and internationally, including causing diplomatic difficulties and negatively impacting markets for American products, not to mention serious violations of privacy and liberty.
  3. Prosecute for past violations of existing laws, as well as for perjury before Congress. (And absolutely do NOT again grant retroactive immunity…) Systematically and repeatedly violating both the will of Congress and the limits imposed by the FISA Court cannot be allowed to pass leniently; nor can repeatedly lying under oath to Congress. Major criminal investigations need to be launched and the perpetrators brought to justice. A GREAT DEAL OF INFORMATION WILL BE BROUGHT TO LIGHT BY DISCOVERY IN THE PROCESS–AS IT SHOULD BE.
  4. Courts should stop deferring to the executive branch in matters of what needs to be kept secret. While they should carefully consider the Government’s arguments about the need for secrecy, I think they have, in general, been far too sympathetic to the Government’s point of view, to the detriment of the public’s right to know important information. In many cases the answer is simple: refuse to make the proceedings confidential, and give the government the choice: if you want to present your evidence, it must be in public. If you want to keep it private, you can’t introduce it as evidence to support your case. Done.
  5. Since bulk collection will no longer be allowed, and that constitutes much of the NSA’s activity, the NSA can no longer justify a large proportion of its budget. NSA’s budget must be cut by at least 50%. (I’d propose a 90% cutback for the NSA, and lower cutbacks of 15-25% for the FBI, CIA, DEA, TSA, military, and other agencies.) This would also remove the NSA’s financial ability to perpetrate abuses on such a large scale, forcing them to focus their activities appropriately. It would also punish financially both the agency and their contractors for their corruption and past (and current) abuses.
  6. Enact general privacy laws:
    1. The “third-party doctrine” must be completely eliminated. Virtually everything we do in modern life requires the use of a third party to accomplish. It is ridiculous to think that having handed our information to ONE party (who could be seen as “a member of the public”) amounts to handing that information to “every member of the public as a whole” (or to the government). There are restrictions in place on a few very specific areas of third-party data usage (physical mail, telephones, health care, client-attorney privilege, etc.), but to assume that the lack of such specific coverage for other types of communication or information storage implies they should provide no privacy at all is ridiculous. Even posting to a Facebook wall is not “public”, because many, if not most, people set their postings to be seen by “friends only”. Thus it is not true that even those Facebook postings are in any way “public” – let alone private messages sent to a particular user.
    2. It needs to be made illegal for companies to share anyone’s information with any other party except in specific circumstances:
      1. If the customer has given their express written consent. This consent should need to include a listing of all information fields that will potentially be shared (i.e., first name, last name, address, birthdate, social security number, online status, friends list, etc.) as well as the exact party or parties it will potentially be shared with (meaning, the actual company names). If new information fields are to be shared with that service provider, each user must provide consent again before they can be shared, since they only consented to the previous list of fields. The same would be true if a new service provider were used.
      2. If the company is legally compelled to do so via a court order specifying by name (or username) the customer in question and the information being sought. By law, the company may only provide the specified information (i.e., only the particular fields requested) and only provide information regarding the named customer.
      3. If the company suspects or has observed a crime, they can inform the authorities with only the information sufficient to allow the authorities to determine a crime may have been committed, and an associated username so that appropriate warrants may be generated targeting that user’s account in order to gain access to the further information, such as the actual evidence linked to that user or the user’s personally-identifying information. Only if the authorities return with a valid court-ordered warrant may they gain access to the user’s account.
    3. Any company making use of a person’s data must be required to take all reasonable steps to safeguard that data, such as using encrypted communications and data storage and access controls both internally and between the company and the user. Standards similar to PCI should be required of ANY service handling customer information, even if non-financial.

Securing Passwords

There are many aspects of web security, but one that seems to need particular attention is passwords. Passwords are perhaps the single most sensitive piece of information users should keep private online – even more so than personal information such as birthdate, social security number, credit card number, bank account number, mother’s maiden name, or favorite childhood pet. Almost all of those could potentially be researched or known by friends or family members, and if any of them are stored by a website, whoever has the password can go in and access them anyway. And since users should go to extreme lengths to keep passwords private, so should websites.

What’s so special about passwords?

Why are passwords so important? Not only are they the keys to the kingdom of the site they are meant for, giving unfettered access to everything in that user’s account (including all of that other sensitive information), but, being humans struggling to remember passwords for probably dozens of different sites and services, we tend to re-use the same passwords in more than one place, usually in combination with the same email or username, too. This means that compromising a user’s password on one site potentially compromises all of their online activity and accounts, which may give access to virtually all of their personal information and communications.

While it’s easy to say “use a complicated long passphrase that is unique for each site”, in practice we human beings struggle with such a challenge. And while it’s equally easy to say “use a password manager”, the reality is we often have to log in from different devices, and most people don’t use password managers even on their primary device. That’s where web developers come in: effectively making passwords unique while maintaining other best practices in their handling.

Recommendations

A lot of these recommendations are basic, but I still run into sites that obviously store passwords as plaintext – they’ll even helpfully email your password to you if you forget it! Haven’t we all heard enough times about site break-ins that exposed the entire password database? It’s happened to countless sites large and small. In combination with the extreme speed of password hash cracking these days, that’s bad news. And then there are the many sites using regular unencrypted http connections during login, exposing your password to anyone who happens to be sniffing the network somewhere along the line – easily done on many WiFi connections.

So here are some rules for handling passwords:

  1. Never transmit a password over an unencrypted connection. If your site allows user sign-ups, even for “casual” accounts with little sensitive personal information, you MUST get a signed SSL certificate (maybe $50 per year) for your site, and not only enable SSL, but also at least strongly encourage, or preferably require, the use of https in order to log in.
  2. Hash your passwords client-side using Javascript. This is actually quite an unusual step to take, largely because it doesn’t do anything to improve the security of your site itself. But what it does do is help protect the security of the password a user types into the browser with their keyboard. Yes, someone intercepting the password in transit could use it in a “replay” attack to access that particular site, but they could not easily use it to log into another site as the same user.
    • Of course you shouldn’t just hash the password alone, it should be hashed with the unique username (to distinguish it from anyone else using the same password on that site) and a string unique to the site itself (to distinguish it from the same username/password combination on a different site).
    • You need to enable an identical backup hashing capability on the server in case Javascript is disabled in the browser, but for most users, while setting their password or logging in, the server (not to mention the network connecting to the server) should never know what password the user actually typed.
    • Yes, you still need to hash again on the server (see next), and yes the connection should be encrypted already (see previous), but this is a “defense in depth” measure.
    • Use the best (hardest to crack) hashing function available, but CPU and memory limits, particularly on mobile devices, might preclude using the same method as is used for the normal server-side hashing mentioned next. (Note: since few if any websites use client-side hashing to help protect the uniqueness of user passwords across sites, users themselves can use a tool like this secure password generator, explained here. This code might be a good starting point for client-side password hashing as well. A rough sketch of the idea appears after this list.)
  3. Hash received passwords on the server using scrypt before storage or comparison to a stored password verifier. For this, do not use MD5, do not use SHA-1, don’t even use SHA-256 or SHA-512 (though I guess you could use one of the latter with enough rounds of hashing, if you must). Use scrypt, or at worst bcrypt or PBKDF2 (though either of those can be cracked thousands of times faster than scrypt). Bcrypt is designed to be deliberately expensive to compute, even when run across a large number of CPUs; scrypt has the added property of being memory-hard, so it is expensive to parallelize in memory as well as in CPU time. (A server-side sketch follows this list.)
  4. Keep the password file as secure as possible. Make sure it’s readable and writable only by the proper users, audit access to it, and preferably even keep it on a separate server from the main web server, with physical security, limited functionality, different/limited user access, no direct access to the Internet, and only allowing connections to it from the web server and only for purposes of querying the “is this password good for this username?” service.
  5. Introduce a delay between login attempts if they make too many wrong guesses. Make this delay increase gradually the more wrong guesses they make. For instance, after five wrong guesses, add a five-second delay, then increase it to 10 seconds, then 20, etc. Don’t ever actually lock them out of their account, just make them wait longer periods of time before being able to guess again. (Of course, after that time expires without another attempt, roll back the delay until eventually they would have no delay again. A sketch of this kind of escalating delay follows the list.)
  6. Encourage strong passwords and forbid especially weak ones. A “strength meter” (color coded and/or numerical/length) can encourage stronger passwords (longer and with more entropy), but there should also be a minimum requirement to accept a password. I’d suggest the following as a minimum:
    1. Require at least eight characters, but strongly encourage more (and allow a large number, perhaps as many as 256 characters).
    2. Require the password contain at least two types of characters (from lowercase letters, uppercase letters, numbers, or symbols), but strongly encourage at least three and preferably all four types of characters be present, as well as more than one of each.
    3. Load a list of the most common 1000 to 10,000 passwords into the browser in Javascript, memory permitting (not on the server side), and compare the typed password against the list to warn the user it’s too easily found with a dictionary attack, perhaps saying “that is the 37th most common password known to password crackers, you might want to choose something more original”.
  7. Institute a solid password recovery process. I think I’ll save this for a separate post, as it can be pretty involved.
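
To illustrate rule 2, here is a rough sketch of deriving a per-user, per-site password in the browser before anything is sent to the server. The post suggests something like bcrypt; since browsers don’t ship bcrypt natively, this sketch substitutes PBKDF2 from the built-in Web Crypto API, and the iteration count, salt construction, and 40-character truncation are illustrative assumptions rather than any standard.

```typescript
// Sketch of rule 2: derive a site- and user-specific value from the typed
// password so the raw password never leaves the browser. Uses the Web Crypto
// API's PBKDF2 as a stand-in for the bcrypt-like function suggested above.

async function deriveLoginPassword(
  password: string,
  username: string,
  siteName: string,            // a string unique to the site, e.g. its domain
): Promise<string> {
  const enc = new TextEncoder();

  const baseKey = await crypto.subtle.importKey(
    "raw", enc.encode(password), "PBKDF2", false, ["deriveBits"],
  );

  // Username + site name act as the salt, so the same password yields
  // different derived values for different users and different sites.
  const salt = enc.encode(`${siteName}:${username}`);

  const bits = await crypto.subtle.deriveBits(
    { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
    baseKey,
    256,
  );

  // Hex-encode and truncate so the result fits typical password length rules;
  // this derived string is what gets submitted instead of the raw password.
  return Array.from(new Uint8Array(bits))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("")
    .slice(0, 40);
}
```

The server then treats this derived string as the password and hashes it again, as described in rule 3.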
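
A sketch of rule 3 on the server side, assuming a Node.js backend and its built-in scrypt; the salt size and cost parameters (N, r, p) shown here are placeholders to tune for your hardware, not recommendations from the post.

```typescript
// Sketch of rule 3: hash whatever the client submits with scrypt before
// storing it or comparing it to the stored verifier. Never store the password
// (or the client-side hash) itself.

import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

const KEY_LEN = 64;
const SCRYPT_OPTS = { N: 1 << 15, r: 8, p: 1, maxmem: 64 * 1024 * 1024 };

// On sign-up or password change: store the salt and derived key.
export function hashPassword(submitted: string): { salt: string; hash: string } {
  const salt = randomBytes(16);
  const hash = scryptSync(submitted, salt, KEY_LEN, SCRYPT_OPTS);
  return { salt: salt.toString("hex"), hash: hash.toString("hex") };
}

// On login: re-derive with the stored salt and compare in constant time.
export function verifyPassword(
  submitted: string,
  stored: { salt: string; hash: string },
): boolean {
  const candidate = scryptSync(
    submitted, Buffer.from(stored.salt, "hex"), KEY_LEN, SCRYPT_OPTS,
  );
  return timingSafeEqual(candidate, Buffer.from(stored.hash, "hex"));
}
```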
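
And a sketch of the escalating delay from rule 5. The in-memory map, the five-attempt threshold, the doubling schedule, and the one-hour reset are assumptions for illustration; a real service would track this per account in persistent storage and roll the delay back gradually.

```typescript
// Sketch of rule 5: after five wrong guesses, require a 5-second wait before
// the next attempt, then 10, 20, and so on. No hard lockout; the penalty is
// simply forgiven after a quiet period (simplified here to a full reset).

const THRESHOLD = 5;              // wrong guesses allowed before delays begin
const BASE_DELAY_MS = 5_000;      // first delay: five seconds
const RESET_MS = 60 * 60 * 1000;  // forget failures after an hour of quiet

interface FailureRecord { count: number; lastFailure: number; }
const failures = new Map<string, FailureRecord>();

export function requiredDelayMs(username: string, now = Date.now()): number {
  const rec = failures.get(username);
  if (!rec) return 0;
  if (now - rec.lastFailure > RESET_MS) {  // quiet long enough: forgive
    failures.delete(username);
    return 0;
  }
  const over = rec.count - THRESHOLD;
  if (over < 0) return 0;
  return BASE_DELAY_MS * 2 ** over;        // 5s, 10s, 20s, ...
}

export function canAttempt(username: string, now = Date.now()): boolean {
  const rec = failures.get(username);
  if (!rec) return true;
  return now - rec.lastFailure >= requiredDelayMs(username, now);
}

export function recordFailure(username: string, now = Date.now()): void {
  const rec = failures.get(username) ?? { count: 0, lastFailure: now };
  rec.count += 1;
  rec.lastFailure = now;
  failures.set(username, rec);
}

export function recordSuccess(username: string): void {
  failures.delete(username);
}
```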

Handling security isn’t necessarily easy, but not trying to do so is a grave disservice to a site’s users, even if the site involves little or no personal information, no commerce of any kind, etc. And properly handling passwords is one of the keystones of good security for sites with user logins.

Opportunistic Encryption and How Browsers Handle Certificate Problems

As a follow-up to my last post on improving privacy on the Internet, I ran across the concept of opportunistic encryption, which I’ve heard about before but which has never seemed to go anywhere.

Opportunistic encryption seems most interesting at the TCP layer, so that it is transparent to not only the user, but to applications that use the network as well. However, there are technical challenges to successfully implementing it without introducing undue complexity or noticeable reductions in performance. Such schemes have also never been accepted by a standards body, so their chance of widespread adoption seems slim (though you can try one such scheme, TCPCrypt, already; however, it requires the other end of your communication to have TCPCrypt installed as well, which seems unlikely in most cases).

Thus, as I noted in the last post, web and email seem to offer the best opportunities for adding encryption that’s transparent to the user.

How web browsers handle encryption problems

This leads us to https, the security and privacy protocol for web browsing. As I said previously, we’d like to encourage as many web servers as possible to support, and preferably even mandate, the use of SSL/TLS for web browsing. And the web developers, systems administrators, and internet engineers out there can certainly help make that happen.

But there are lots of things to get right when implementing web security. Getting them wrong can make you susceptible to various kinds of attacks, mostly based on some form of man-in-the-middle. That’s why browsers go to such lengths to warn users about problems, often denying access to the site if a problem is detected, until the user explicitly overrides this warning.

But is this the right behavior to take? Is badly-configured encryption really worse than no encryption at all? Web browser vendors sure seem to think so, but I disagree. While a misconfiguration such as a mismatch between the domain named in the certificate and the actual hostname may be a sign of a man-in-the-middle attack, in my experience it’s almost always due to something else. Similarly, self-signed or expired certificates are extremely unlikely to indicate a man-in-the-middle attack. And while none of these situations is ideal, they are all almost always far better than having no encryption at all.

Undesired behavior

So what actually happens when a server has a misconfigured certificate, and the browser throws up a big warning? Either the user can ignore the warning (which is potentially dangerous, but actually fine 99% or more of the time), they can switch to insecure http (which is, at best, the same as continuing with the untrusted encryption, but much worse the vast majority of the time), or they can discontinue using the site entirely, which hurts both them and the business, and is usually unnecessary since the chances of it being an actual man-in-the-middle attack are slim.

When the operator of the site sees the problem, they may choose to fix it – but they might just choose to disable https instead (and aside from e-commerce sites, I’d suspect the latter is more likely, at least in the short term). Yes, they should fix it, but more often than not they are not going to.

The net result of these browser warnings is scaring and confusing users without increasing their security, since between the users and the website owners, the most likely course of action is to either ignore the warning and proceed (which browser vendors have combatted with ever more dire and difficult to bypass warnings), or to revert to the even-worse unsecured http.

False sense of security

But at least from the point of view of opportunistic encryption, encryption using an expired, weak, self-signed certificate is vastly preferable to no encryption at all. The only danger is providing a false sense of security. But browser vendors have done exactly that by turning everything on its head, by making totally unsecured connections seem preferable to many sorts of encrypted connections, since the unsecured connections do not throw up warnings in the browser!

We need to encourage the use of https connections on the Internet, and part of encouraging its use means not discouraging it where the implementation is not perfect. While we should encourage proper implementations most of all, we should also encourage opportunistic encryption as better than no encryption, even if we aren’t guaranteeing privacy or integrity in the face of man-in-the-middle attacks (which take some effort and are quite rare in the grand scheme of things).

How to fix this?

The fix should actually be simple: change how web browsers communicate encryption problems to users and, most of all, how that communication compares to the way they handle totally unencrypted connections.

I propose a “sliding scale” of perceived security. In the browser bar, the scale could be represented by a range of colors and icons, as follows (a rough sketch of the scale in code follows the list):

  • UNENCRYPTED: Non-https connections would always be highlighted in red. Use of “null” encryption ciphers would also put a connection in this category. In addition, I’d suggest a “bullhorn” or similar icon to communicate that you are broadcasting your activity to the world (a typical radio broadcast icon could work too, but could be confused with wifi). When clicking for more detail, it could warn the user as follows:
    • THE BAD:
      • Your connection is unencrypted. Anyone on the Internet could listen in and see what you’re doing, including viewing your password if you are logging in, could modify or replace the content sent between you and the server without your knowledge, or could be logged in as you and have full access to your account.
  • INSECURE ENCRYPTION: This would be used for various kinds of encryption which have problems that could leave them susceptible to or be a sign of a man-in-the-middle attack, such as self-signed certificates, revoked or long-since expired certificates, or certificates for a domain which does not match the hostname, but where the encryption is still useful for opportunistic encryption and protecting from casual observers. Use of particularly insecure types of encryption (weak or compromised ciphers such as “export” ciphers, too-short key length, etc.) could also contribute to showing up in this category. These should be signified by a broken or unlocked lock icon. Clicking for more detail could notify the user as follows:
    • THE BAD:
      • The certificate used by this site is [unsigned/signed for a domain that does not match the actual hostname/expired/revoked], and thus does not guarantee protection from a man-in-the-middle attack. (Along with more detail, such as a comparison of the domain name for the certificate with the actual host name, the date the certificate expired or was revoked, and a note that certificates could be revoked due to knowledge that the encryption keys have been stolen or misused.)
      • (possibly) The encryption in use is considered weak enough to be easily cracked in a reasonable time by “brute force” methods.
    • THE GOOD:
      • Your connection is encrypted, so your activities cannot be viewed by casual observers monitoring traffic on the Internet.
      • Man-in-the-middle attacks take some effort to mount and are fairly rare, so most likely your connection is secure and the warning is due to a much more mundane misconfiguration; however, there is no way to guarantee it.
  • SEMI-SECURE ENCRYPTION: This might have some kind of closed or almost-closed (maybe closed, but with a crack) lock icon. It would be a variant of the above, but where the “misconfigurations” were considered more minor, such as:
    • Signed for a subdomain that doesn’t match the hostname exactly, but shares the same overall domain name. For instance, a certificate signed for “users.mysite.com” would be considered semi-safe if used on “www.mysite.com” (or any other *.mysite.com), even though it’s not an exact match.
    • Recently expired, for instance within the last 90 days.
    • Encryption that may have some weaknesses, but is considered secure against anyone short of the NSA, and probably not super easy for even the NSA to crack in a reasonable time and on a wide scale.
  • SECURE CONNECTION: This would be used for connections that are considered fully secure: a properly signed (by a trusted certificate authority), unexpired and unrevoked certificate which matches the hostname. The connection should also be using the strongest cipher suites available. These would have a closed lock icon. Clicking for more detail could notify the user as follows:
    • THE GOOD:
      • Your connection is encrypted, so your activities cannot be viewed by observers monitoring traffic on the Internet.
      • The certificate used by this site is properly signed by a certificate authority, is not expired or revoked, and matches the hostname it is signed for, protecting you from man-in-the-middle attacks.
  • Extended validation: Much is made of extended validation certificates, which verify more information about the identity of the site using the certificate, and in the case of e-commerce it may make some sense to help trust who you are giving your money to. But I think they are more a means to increase profits for the certificate vendors, and I think the visual differentiation they are given is wholly unwarranted. Even a site with an EV certificate could take your money without shipping you the product you ordered, charge more than agreed, sell your information to others, or otherwise cheat you; they could also be just as likely to allow NSA access to their private encryption key (either through cooperation or hacking). And most sites without EV certificates are probably perfectly trustworthy even if they didn’t bother to pay 10x as much to get their certificate. However, it could add a green checkmark across the lock icon and an additional benefit to the “Good” category when clicking for more detail:
    • THE GOOD:
      • Your connection is encrypted, so your activities cannot be viewed by observers monitoring traffic on the Internet.
      • The certificate used by this site is properly signed by a certificate authority, is not expired or revoked, and matches the hostname it is signed for, protecting you from man-in-the-middle attacks.
      • The domain for this website has undergone extended validation of the identity of its owner.
  • Forward secrecy: Using ephemeral cipher suites to achieve “perfect forward secrecy” is also highly desirable, and such sites should be differentiated with an even more secure-looking icon (or at least sparkly/magical/happy-looking) and an additional benefit:
    • THE GOOD:
      • The encryption keys change each time you connect, so gaining the master keys will not allow an attacker to see your past or future activities.
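
To make the proposal a bit more concrete, here is a rough sketch of the sliding scale as a classification function. The tier names mirror the categories above, but the connection fields, the 90-day window, and the simplified treatment of cipher strength are assumptions for illustration, not any browser’s actual model.

```typescript
// Rough sketch of the proposed sliding scale: map a connection's certificate
// and cipher state to one of the perceived-security tiers described above.

type SecurityTier = "unencrypted" | "insecure" | "semi-secure" | "secure";

interface ConnectionInfo {
  encrypted: boolean;               // false for plain http or "null" ciphers
  certTrusted: boolean;             // chains to a trusted CA (not self-signed)
  certRevoked: boolean;
  hostnameMatch: "exact" | "same-domain" | "mismatch";
  daysSinceExpiry: number;          // <= 0 while the certificate is still valid
  cipherStrength: "strong" | "weak";
}

function classify(c: ConnectionInfo): SecurityTier {
  if (!c.encrypted) return "unencrypted";           // red, "bullhorn" icon

  const fullyTrusted =
    c.certTrusted && !c.certRevoked &&
    c.hostnameMatch === "exact" &&
    c.daysSinceExpiry <= 0 &&
    c.cipherStrength === "strong";
  if (fullyTrusted) return "secure";                // closed lock

  const minorIssuesOnly =
    c.certTrusted && !c.certRevoked &&
    c.hostnameMatch !== "mismatch" &&               // same-domain subdomain is OK
    c.daysSinceExpiry <= 90 &&                      // recently expired
    c.cipherStrength === "strong";
  if (minorIssuesOnly) return "semi-secure";        // lock with a crack

  return "insecure";  // broken lock: still useful as opportunistic encryption
}
```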

Three Ways Web Developers Can Improve Internet Privacy

With all the revelations about out-of-control government spying on the Internet, a great deal of attention has been paid to:

  1. Political changes, such as new laws and legal interpretations. This, of course, is at the core of the problem – what they’re doing should not be legal, or if it’s already illegal, more effort should be made to notice when it’s happening and stop it, and somebody should be getting in trouble for doing it. However, there will be a lot of resistance to this, and change will take a lot of time and likely be incomplete.
  2. “NSA-proof” privacy solutions, such as end-to-end encrypted email or chat, or using TOR to browse the web. While no solution is really “NSA-proof” in the end (especially if they target your actual computer), a lot of solutions can come reasonably close. But end users often find such solutions inconvenient to use, or may not even be aware of them. Worse, they may not feel they have anything to hide from the government or are skeptical they’d be targeted for attention; indeed, we are aware that using such tools explicitly DOES single you out for attention from three-letter agencies.

These approaches are not only laudable, but critical – they are necessary to protect against determined, focused attacks by three-letter agencies. But there are many other things that can be done to protect against casual “hoovering” of information on the Internet. Part of the problem is simply this: it’s too convenient to access most information by casual listening, because there isn’t even a pretense of privacy or security when information is transmitted without any encryption at all. This leaves a very large amount of internet traffic unencrypted for them to sift through without needing to crack or otherwise bypass any form of encryption.

But what if we made encryption the default for more traffic? While it would still be feasible for the NSA to crack or bypass much of that encryption when they really wanted to (by hacking your computer to install a key logger, for instance, or requiring a service provider to hand over your data), merely enabling encryption where it is currently missing could vastly reduce the amount of unencrypted traffic flowing through the “pipes”, meaning it would cost a lot more to sift through, while also making it more difficult to target encrypted traffic for special treatment as “suspicious activity”.

Most encryption beyond whatever happens to be enabled by default turns out to be too difficult for most users to deal with. We also can’t control what access the government has to Google, Microsoft, Yahoo, and Facebook that bypasses the https connections to their servers. But as engineers working on all the other websites and servers out there, we do have control over a lot of other things.

There is much that can be improved: security and privacy on the Internet are shockingly bad, and not just because the NSA is really good at their job (though part of their job is supposed to be strengthening our cyber-security, a task I believe they are failing at). A lot of this is caused by laziness on the part of developers, sysadmins, and internet engineers, as well as a lack of understanding, priorities, or budget from managers.

But many of these changes don’t really take that much time, and aside from that, often the only cost is that of a signed SSL certificate, available for as low as $50 per year.

While there are many security tips for how to lock down your server and network, here I will only talk about simple steps you can take to increase the “background noise” level of security and privacy of communications over the Internet. Here are some suggestions:

  1. Enable HTTPS/SSL on your web server. I’ll talk about this more below.
  2. Enable TLS for SMTP on your mail server. While it is probably not feasible to force the use of TLS at all times (many mail servers may still not support it), at least enabling it on yours increases the odds of email transfers between servers being encrypted.
  3. Disable FTP and telnet in favor of SFTP and SSH. You don’t want to be talking to your server or transferring files over non-private connections when there are secure alternatives that are just as easy to use.

These three steps, taken by the administrators of many sites around the Internet, could end up encrypting a large amount of traffic that is currently sent as plaintext.

Enable HTTPS/SSL on your web server

This is perhaps the most obvious one, as the web is probably the biggest activity people use the Internet for, and whether a site is secure or not is immediately visible to users.

What does it take?

  1. Install a certificate and encryption keys. In order to protect against man-in-the-middle attacks, this should be bought from a legitimate certificate authority, rather than using a self-signed certificate. However, aside from e-commerce sites, where there’s extra value in trusting who you’re about to give your credit card number to, there’s not much benefit to so-called “Extended Validation” certificates aside from more profit for the certificate vendor.
  2. Enable port 443 on your web server, referencing the keys that were installed in step one.
  3. Make sure your web pages work properly over SSL, most particularly that they don’t include any insecure content that would trigger “mixed content” warnings in the browser. This includes CSS and JS files, images, and background images referenced from the CSS.
  4. Make your SSL as secure as it can be. This includes:
    1. Using at least 2048-bit encryption keys.
    2. Enabling “perfect forward secrecy” by enabling the needed “ephemeral” cipher suites and making their use preferential, as well as making sure TLS Session Tickets are disabled.
    3. Disabling weak cipher suites, such as anonymous, null, or export ciphers, as well as avoiding Dual_EC_DRBG, which appears to have been “back-doored” by the NSA.
    4. Protecting against BEAST and CRIME attacks by upgrading to TLS 1.2, de-prioritizing vulnerable cipher suites (unfortunately there is no clear approach that works in all situations), and disabling TLS compression.
  5. Make encryption mandatory by implementing a global 301 or 302 redirect from port 80 to the same URL on port 443 (see the sketch below), and updating all your internal links to reference https.
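
As an example of step 5, here is a sketch of the redirect, assuming the site runs on Node.js with Express; on Apache or nginx the equivalent is a one-line redirect or rewrite rule. The https listener itself (port 443, using the certificate from step 1) is configured separately.

```typescript
// Sketch of step 5, assuming a Node.js/Express site: send every plain-http
// request to the same URL over https with a permanent redirect.

import express from "express";

const app = express();

app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});

// The https server (listening on 443 with the keys from step 1) and the
// plain-http listener on port 80 are created separately, e.g. with the
// standard "https" and "http" modules.
```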

Why the NSA Spying is Even Worse Than it Sounds

Apologists are already trying to paint the recent revelations of NSA access to data at services such as Facebook, Yahoo, Microsoft, and Google as being more innocent than they seem. I believe nothing could be further from the truth.

Backbone Taps

As disclosed years ago, the NSA already taps data passing through major Internet backbones in a number of locations. Thus, they are already able to see all the traffic passing by, and record as much of it as they want. Indeed, they are building massive billion-dollar data centers in Utah and Maryland just to store all the recorded data for future use.

Bypassing Encryption

However, the connections to Facebook, major email providers like Microsoft’s Outlook/Hotmail, Yahoo, and Gmail, as well as messages and calls made through Skype, are all encrypted. Thus, while the NSA/FBI can record all the conversations, they cannot (easily) read them.

That’s where the back-door access to these service providers comes in. This provides the NSA and FBI with the means to either get the information they want straight from the service provider, or else request the encryption key to unlock the data they already have stored.

Thus, getting past the encryption is the only reason they need this access. They are already recording and storing all your emails, Skype calls, Facebook messages, etc.; were it not for the encryption, they could already be reading, automatically analyzing, data mining, and searching through all of it.

Why it Matters

As Moxie Marlinspike points out, policing used to be a lot harder; it was impractical to monitor communications, locations and other data so easily. And it is also important for a functioning democratic society for law enforcement to be so imperfect, since so many actions, particularly progressive ones (he points out marijuana use and gay marriage) are technically in violation of some law or other. Indeed, almost everyone almost certainly breaks SOME laws as part of their normal life, many of which they may not even be aware of.

As a result, the surveillance state is becoming truly scary, because being able to track and identify a wide range of potential lawbreaking (either in real time or retroactively) is no longer inhibited by cost or practicality, and barely inhibited by legal restraints (when those laws are even being adhered to, which I expect they are not half the time). We are all criminals, and now all subject to being caught doing something wrong that could be punished, should anyone in a position of power have a desire to do so.

What to Do

It’s time to fight hard against the surveillance state – laws need to change, secrecy needs to be lifted so that such activities are in full public view, and budgets need to be cut drastically so that it remains impractical for the government to spy on all of us all the time.

At the least, we should not be happy that the federal government spends billions of our taxpayer dollars to spy on us rather than on better healthcare or other worthy goals. Even worse is that I now live in more fear of my own government than I ever have of terrorists.

Everyone needs to communicate to their elected officials that the direction things are going – indeed, the current status quo – is not acceptable, and demand to know what they intend to do to fix things. Vote them out of office if they do not try to make things better.

Soccer update: Both Hawkeye and GoalRef systems approved, and where this could lead

The Daily Mail reports that the International FA Board, governing soccer in England, Scotland, Wales, and Ireland in conjunction with FIFA, has approved both competing “goal-line” technologies – Hawkeye (a camera-based system already in use in Cricket and Tennis) and GoalRef (an electro-magnetic system relying on several magnetic strips placed in the ball); while neither is likely to be used in the Premier League this coming season, they might be used as early as December’s Club World Cup.

Separately, Adidas has an initiative with Major League Soccer called “miCoach” or “Smart Soccer”, which tracks not only the ball but also the cleats of the players.

Combining all this, it should be possible to use these automated systems to determine several things (in addition to stats like “how far did this player run during the game”):

  • Any time a goal is scored (the intended use for goal-line technology)
  • Any time the ball goes out of play over the endline or sideline (for a goal kick, corner kick, or throw-in). This is one of the main reasons to have assistant refs patrol the sidelines, and if the technology were extended to this use, it would partly make their current positioning obsolete.
  • Relative positioning of players from both teams and position of the ball, which would allow calculating when a player might be in an offside position. It could be made “smart” by only sounding an alarm if a player was in an offside position when the ball was struck by a teammate and, without first contacting another player, the ball subsequently came within some close distance of that player. This is another major responsibility of assistant refs. The technology could likely perform this task more accurately and consistently than the assistant refs. (Among other things, the assistant refs often rely on the sound of the ball being kicked to determine the moment the player might be offside, but that sound is delayed by the time it reaches them. They may also not be exactly in line with the offside player, and any angle could affect the call.)

So… if technology could now potentially take responsibility for all out-of-bounds calls (including goals, goal kicks, corner kicks, and throw-ins), as well as determining offside position:

  • The assistant refs wouldn’t need to stick strictly to the sidelines. They could be each assigned a half of the field to patrol, and be allowed to wander on the field as they see fit. This could allow them to stay closer to the action and also triangulate better around the players and ball, without requiring extra referees. It would also make it easier for them to confer with the referee and players in person.
  • All the referees would be able to pay less attention to whether the ball has crossed the line or a player is potentially in an offside position, and more attention to who last touched the ball as well as fouls and other illegal behavior – that is, all the things the technology can’t really do.

So the question is: has Adidas, Hawkeye, GoalRef, or anyone else experimented with this extended use? Are there any technical hurdles to using the technology for all these purposes?

So here would be my suggested refereeing configuration:

  • Technical system to track when the ball passes out of bounds (including for a goal), notifying the referees (but not the spectators) when it does, as well as track when any player is in an offside position
  • Main on-field referee who is in absolute charge of all calls (as currently)
  • Two assistant referees, each of whom is allowed to roam anywhere in their assigned half of the field (as opposed to being limited to staying on the sideline, as currently). The trade-off is more chance of a referee interfering with play, but I think it would be a worthwhile trade-off.
  • Fourth official to manage sideline/coaches/substitutions (as currently)
  • Fifth “TV” official up in the broadcast booth, to notify on-field referees of incidents plainly visible on TV or help confirm what happened. This official could also possibly help keep track of information such as how many fouls have been called against each player.

While such steps can never fully eradicate “bad calls”, I think they could go a long way toward reducing some of the worst problems currently encountered in officiating a soccer game.

How can the San Francisco Symphony get even better?

The San Francisco Symphony is a great orchestra and is doing nearly everything right, so I probably don’t have much useful advice to offer, but there are a few areas where they could perhaps make some changes for the better.

Audition process – final stages

First, here’s my understanding of the current audition process:

  1. Post the audition notice on their website and in union bulletins etc.
  2. Review submitted resumes (typically 150 or so). Invite some (maybe 30) to the audition; from others, request a tape to review before deciding whether to invite them. (Few are ever chosen from this latter group.)
  3. Start the audition process, round one. Candidates pay their own way to come to the auditions. These are blind auditions: the committee is behind a screen in the hall so it can’t see the candidate, the candidate doesn’t speak, and there’s carpet on the stage so their shoes can’t be identified by sound. The music is mostly selected orchestral excerpts (the list is published in advance). About half are eliminated in each round.
  4. Invite (recruit) a few selected musicians they already know about. They get to skip the first round or two of auditions.
  5. When it gets to the final round – say, the last 2-3 candidates – the auditions become non-blind. The screen comes down, the committee knows who each candidate is for the first time, and they can see and converse with each other. But the format otherwise stays the same.

It’s this last round that I would change. Now that it’s non-blind, why keep the same “solo” format? First, it may be useful to sit down with them, like a normal interview. But more useful, I think, would be playing together with existing Symphony musicians in ensemble. Here are the formats I’d propose:

  • Mini sectional. Play the same orchestral excerpts, only now with several other members of the same section. Also use a conductor. You want to see how they fit in, blend, keep up, follow, adapt – but also how they lead, contribute, and make the group better.
  • Mini chamber ensemble. More orchestra excerpts, again with a conductor, but now with members of several related sections, but one player to a part. Essentially a string quartet or wind or brass quintet, playing orchestra and chamber orchestra repertory.
  • Actual chamber music. Play real string quartets, wind quintets, brass quintets. No conductor. Now you see what they think about the music, how they communicate and collaborate. Also, chamber music is a big deal for the Symphony (as it should be), so make sure they are in tune with that.

This would be followed by a two-week trial for one or two of the candidates (which I believe already happens).

At any rate, the goal is to find out how well the candidate fits in with the orchestra (while hopefully bringing in new strengths), how well they collaborate and how much existing musicians enjoy sitting next to and playing together with them.

Some other considerations for auditions:

  1. Always try to hire candidates that are better than the musicians presently in the orchestra. If you were to guess that the candidate would be below average among their section, don’t hire them.
  2. Always favor talent over experience. The latter is easy to acquire with the passage of time. Improvement in the former is far less likely.

More chamber music – preconcert concerts instead of lectures

I believe playing chamber music is a great way to improve the quality of an orchestra. It challenges musicians with interesting music, it totally exposes them so there’s nowhere to hide, and the musicians love it: it’s fun. So it’s a good way to attract and retain good musicians.

The SFS already has a chamber music series of a dozen or so concerts. It has also experimented with pre-concert concerts (chamber music, chorus, etc.) before regular concerts, in place of the usual “get to know the music” lecture. The one time I saw this, I really liked it. I’d like to see this become a regular practice – chamber music, solo sonatas, etc., with the lectures only used for certain series with a more educational/first-timer focus. The rest of the lectures should go online, where they belong.

Davies turns out to be a surprisingly wonderful venue for chamber music, and these mini-concerts could help advertise their regular chamber music series, while giving the musicians more opportunities to play music they like, the way they like it, and to feature individual musicians more than the full Symphony concerts allow.

More opera – summer opera residency? Exchange program with SF Opera?

Opera is another area I think the Symphony could play more of. Not because I love opera necessarily – if so I’d just go to the SF Opera. No, the reason is again to improve their symphony concerts. Opera is a different and wonderful (and often challenging) repertoire they seldom get to play. But more to the point are two factors, both of which could make an orchestra better:

  1. Opera is mainly accompaniment of solo singers. Doing a lot of this can lead to increased sensitivity and subtlety.
  2. On the other hand, opera is dramatic, and being in the presence of melodrama, sets, costumes, colored lights, and loud bangs can fend off inhibition.

The Symphony could do a summer opera production (possibly at some outdoor location, the way Boston and Chicago go to a countryside festival in the summer). They could do more semi-staged operas during the season. Or they could start a program where Symphony and Opera musicians could trade places for a couple of weeks during the year.

More TV/video

I do like that the Symphony continues to make CDs – they remain more important for classical audiences than for pop listeners – but this isn’t the key to success it once was.

As for television, I love the PBS broadcasts they do, and would love to see more.

But I’d also like to see them find other ways to get their concerts on TV – or internet streaming – regularly, at lower cost.

I think there’s something about being on TV that increases their confidence and makes them more pleasing to watch. They play to the camera (and, if done enough, they might habitually play to the camera even when cameras are not present). This, I think, is what makes the Berlin Philharmonic such fun to watch: they’re used to being on television.

Also, they tend to spiff up their appearance – after thinking “I look so unkempt or dowdy or pale on TV”, they’re going to pay more attention to their (and their colleagues’) appearance. Which makes sense, seeing as they’re performing musicians on whom the audience’s attention is focused.

But besides regular concert videos and “Great Performances” specials, what I’d love to see them do is a behind-the-scenes reality series on Discovery, Ovation or Bravo. Something like Deadliest Catch, Ice Road Truckers, or the ones about navigating container ships or building skyscrapers. This could follow the musicians and staff of the orchestra as they prepare for concerts, recordings, tours, auditions, etc.

Even more prominently, they could also host a competitive reality show, like a mix of American Idol and the YouTube Symphony Orchestra – perhaps a concerto competition for young conservatory graduates to perform with the Symphony on tour?

So those are some of my ideas for ways the SF Symphony might be able to improve even more.

Officiating in Soccer

Like many people, I bemoan the officiating in soccer, especially in MLS.

I think this is particularly important for Major League Soccer in the US (MLS), because allowing excessive fouling degrades the quality of soccer, and the quality in MLS is already borderline; lax officiating only perpetuates the problem. Think about it:

  • Would you rather watch creative attacking players score goals (or at least make credible attempts), or watch those players get hacked, grabbed, or pushed to prevent them from scoring?
  • Would you rather watch your favorite players play, or watch them sit, injured, on the bench?
  • Would you rather watch attacking players confidently try to score, or defensively look for the foul they know is coming?

I thought so.

So what is the problem? I identify three factors:

  1. Inexperienced referees.
  2. Inconsistent or too-lenient refereeing.
  3. Not enough “eyes” to see what’s happening.

I’ll address each of these problems individually.

Inexperienced Referees

Major League Soccer is addressing this with the Professional Referee Organization (PRO), modeled after England’s Professional Game Match Officials Board and headed by English referee Peter Walton. Like player development, referee development is a long, slow process. They are now doing the best they can by taking some very smart steps in the right direction.

Inconsistent and Overly Lenient Calls

While the PRO will undoubtedly address this in many regards, I also feel that officiating in general is too lenient and inconsistent (not just in the US). I understand, and agree with, the approach of being friendly with the players, and I accept the sentiment that it’s a physical game, but I feel the game would be better served by making calls more strictly and consistently. The call can be made in a friendly manner, but it still needs to be an actual foul, not just a verbal warning.

What I mean is, there is no need to give players warnings – they already know the rules, they are professionals, and they play the game every week. There should be no need to give a verbal warning for a first foul and then gradually escalate to a yellow card. Players simply anticipate that, since they will get a graduated response, there’s a certain amount they can get away with, and they will milk that for all it’s worth.

Instead, I suggest that anything that is ever a foul should always be a foul; anything that is ever a yellow card should always be a yellow card; anything that is ever a red card should always be a red card. Of course there are always judgment calls between these, but the goal should be 100% consistency, not a graduated response.

So how to deal with persistent infringement? Simple, use the same approach as is already used with two yellows = red, or basketball’s five foul limit. Simply specify that the third foul by an individual player is an automatic yellow. The third foul after the previous yellow by that player would be a second yellow, and hence ejection. Simple, predictable, strict.

I would also suggest that any foul involving clearly playing the player, not the ball (such as shirt tugging, or bear-hugging) be an automatic yellow card – it’s just counter to the beauty of the game. And I don’t think it should require the ball be in play to call a foul – would you not eject a player for punching another player even if play is not active? Any such infractions that substantially stop a developing attack (not just the last man) should be an automatic red, for instance, grabbing a player by the arm to stop them from passing a long ball on a breakaway. Similarly, I think they should formalize an experiment tried in a youth competition some years ago – give a yellow card for touching the ball in a dead-ball situation if the other team has possession. I’m sick of seeing players pick the ball up from where the free kick should be taken and carry it down the field – I can’t believe that’s not a card.

If refs started doing this, it would indeed be a shock to all the players, but I think it would help the game immensely. I would imagine the first games of a season (pre-season) going something like this: a dozen or more yellow cards, four or five ejections, and a handful of penalty kicks. On the first corner kick of the game alone, I’d expect, before the ball even gets into play, the ref to whistle madly then go hand out three or four yellow cards. Moans of “they’re ruining the game” would be heard, but after a few games of consistent application, the number of fouls, cards, ejections, and penalties would start to drop back to normal – not because the refs get more lenient, but rather because the players learn they can’t get away with fouling any more.

This would help MLS play greatly, at least in the long run: defenders would need to defend with speed, skill, and smarts (good positioning), rather than strength or fouling. Attackers would be able to attack with greater freedom, rather than always anticipating a clumsy foul. Eventually, the league would select for players based on skill and tactical acumen rather than strength and ability to foul, and in the longer run this could influence youth player development.

Not enough eyes to see everything

This is the gist of numerous proposals to help referees become more aware of what’s happening on the field. It started with the issue of errors in awarding goals, but really extends to all aspects of the game. Here are some of the proposals:

  • Goal-line technology. This is where it started, due to a number of botched goal calls. I think it’s a great idea, because all the other proposals might cut the likelihood of error, but not eliminate it as this sort of technology can. In the short run, it will probably be limited to just the goal mouth, but later it could detect a ball going out of bounds anywhere on the field. If it can track players, too, and where the ball contacts a player, then it could eventually even indicate possession, and maybe even offside? That would allow the actual referees to concentrate on fouls and player behavior. These latter applications may be years down the road, though.
  • Two extra assistant referees. Even with goal-line technology, I think MLS should definitely start using two additional assistant referees. These would be placed behind the goal, on the opposite side of the field from the linesman in that half of the field (they should definitely be opposite, though being on the near side has been experimented with as well), ranging from the corner flag to the near goal post. Unlike some of the other proposals, I don’t think they should be allowed to set foot on the field during play – they’d be strictly linesmen, like the other assistants. Here are some of the benefits:
    • Their angle of view is in the blind spot of the other assistant referees, and usually the main ref as well. That is, they can triangulate on the action, with coverage from all sides. I think a lot of the shirt-tugging etc. that happens around the penalty area would stop very quickly with such refs in place, because the blind spot typically exploited by players would no longer exist.
    • The action is closest to them exactly when it is farthest from the other assistant referees. For instance, near the upper left corner flag, it’s a half field length from one assistant referee and a full field width from the other. But the additional assistant could be standing right there.
    • They simply provide two more sets of eyes (though usually only one that is useful, in terms of being close to the play).
  • Video referee. This has also been talked about, and it makes a lot of sense. Among other things, it only makes sense that some member of the officiating crew can see the same things that are obvious to the TV audience (which is usually many more people than are actually in the stadium). Furthermore, the various and unusual angles, close-ups, slow-motion replays, etc. can be invaluable in learning what’s going on, particularly when combined with on-field information. However, I wouldn’t follow the NFL’s lead at all. Among other things, all decisions remain with the main ref, and part of his duty is to make quick decisions and resume play, so a video ref should interfere with that as little as possible. Here’s how I would do it:
    • Video ref is up in the broadcast booth; the main ref cannot review the video himself, he has to rely on voice communication with the video ref.
    • Video ref sees the same broadcast, minus onscreen graphics and commentator voiceovers, as the TV audience.
    • The video ref has no pause/slow-mo/rewind capability, he can just watch the broadcast unfold however it is produced by the TV crew. He can, however, benefit from whatever replays and additional angles the broadcast already includes.
    • The main functions would be:
      1. Notify the referee of infractions that may have been missed by other refs
      2. Help clarify what happened in situations where play is already stopped and ref is already conferring with his crew
      3. Notify ref after the fact of infractions that may have occurred previously, but where it is too late to stop play for (though he can still give a foul or card).
      4. Increase confidence in calls by agreeing with consensus view where appropriate.
  • “Stats” ref to help keep track of how many fouls. With a formal system of counting fouls before giving a “persistent infringement” yellow, an additional assistant ref to keep track of this sort of data could help the main ref immensely. Yes, this is a total of 8 referees, but if the officiating team can master efficient communications, it could work very well.

Ultimately, I’d like to see Major League Soccer volunteer to become a leader in new refereeing techniques. I’d also like to see them become known as the strictest, most consistently-officiated league in the world. I think in future years the resulting clean play would benefit both the enjoyability of watching the game and the quality of players.
