Microsoft Professional Program Cybersecurity

In a previous job, amongst many other things, I was responsible for tending a small herd of HP DL-whatever servers, ~1500 machines all running RHEL. These were considered mission-critical for the business unit. I used various tools for this, one of which was the Integrated Remote Console, typically when some hardware had failed, which happened surprisingly often; HP quality is not what it was in the good old days of PA-RISC and HP-UX. But I digress.

One day I came into work and found that I had been locked out of various systems and that my name was on a spreadsheet sent to “senior management” because I had allegedly been caught “participating in hacker chatrooms”. I did try to explain to the cybersecurity team that this IRC.EXE actually had nothing to do with that, but they were soooo pleased with themselves for having busted a hacking ring, or so they thought, that they wouldn’t listen. So I went and spoke to the head of the business instead, and told him that I could no longer support his mission-critical platform. Well, that got sorted out very quickly. I never got an apology from the cyber guys, but they never bothered me again. Incidentally, the ad for this job had even included the phrase “we’re looking for real hackers”, using the more traditional meaning of the word :-)

I could tell a dozen stories like that from various points in my career. Cybersecurity people are a very mixed bag. Some are very good indeed and can reason about complex systems and their failure modes in ways that had never even occurred to me before. But some, lacking the background of software or systems engineers, have no context and no real idea what the systems they are supposedly defending actually do, or what is normal, legitimate activity and what is suspicious, so they blunder around like bulls in a china shop. And this latter group, for some unknown reason, see themselves as an elite, aloof from common engineers, which doesn’t help anyone: security is a team sport, and for it to work everyone must be a player, including engineers and end-users. Understanding how people use the systems day-to-day is very important, because unless security feels natural to them, people will try to find workarounds for it: USB drives, Dropbox, leaving their unencrypted laptop in the car.

Anyway, I do have such a background, and guys like me have always done the bread-and-butter of cyber (or infosec, as it used to be called), and now I have completed a course to round out my knowledge of this field too.


What do I mean by bread-and-butter? Well, most security work isn’t especially glamorous and exciting. Keeping the hardware and software inventory up to date, staying on top of newly discovered vulnerabilities and newly released patches from vendors, triaging them into change management, ticking off checklists, administering routine updates of AD or similar, maintaining the systems that collect and scan logfiles, dealing with false positives (manual investigation) and fine-tuning the triggers, probing our own systems with fuzzers and suchlike, archiving things for compliance… Educating, and if necessary enforcing, good security practice throughout the organisation. Identifying requirements, evaluating solutions, presenting the findings, same as for any other product or service the organisation might use. The occasional forensic analysis if there is a possible indicator of compromise. Reporting on all of this to “key stakeholders”.

It’s important work and it needs to be done, but it’s also, for most of us, on top of our real jobs. The full-time cyber guys are off doing… whatever it is they do all day. I’m kidding. Sort of. I think most outsiders think that that is what the entire field is!


Unfortunately, Microsoft have just announced that they’re retiring the MPP, so this will be my last one (I was going to do the IoT track next year). That’s a real pity, because the MPP was one of the few programmes that taught conceptual and theoretical skills along with hands-on technical ones, and would thus retain long-term value even when the specific tools used on the course were superseded. That is what attracted me to it in the first place. The new “role-based certifications” are purely about operating particular versions of particular products, which is a big step backwards to short-term value only. It was a bit sudden as well: there will be many people who, with other commitments and so on, will struggle to finish by 31st December. I spent nearly a year doing the Data Science track, on and off. I did this one quickly because, as I mentioned, a lot of it was already familiar!

For posterity’s sake I will preserve the MPP Cybersecurity curriculum here:

  1. INF246x: Enterprise Security Fundamentals
  2. INF249x: Threat Detection: Planning for a Secure Enterprise
  3. INF250x: Planning a Security Incident Response
  4. INF251x: Powershell Security Best Practices
  5. INF258x: Windows 10 Security Features
  6. INF259x: Windows Server 2016 Security Features
  7. DAT243x: Securing Data in Azure and SQL Server
  8. INF253x: Managing Identity
  9. INF260x: Microsoft Azure Security Services
  10. INF261x: Microsoft Professional Capstone: Cybersecurity

A good mix of general principles and technical specifics; even the Windows-specific courses covered material that was applicable to other platforms. I even learnt some new things about SQL Server! The capstone involved defending a mixed network of Windows 8.1, Ubuntu 14 and Windows Server 2012 from a simulated attack in real time; with the older platforms, a lot of the more modern tooling from Windows 10 and Server 2016 was not available, so you had to use your wits. Fun!

I suppose I’ll have to do the same for the other ones as well. I really hope Microsoft will reconsider.

As for where to go from here: this was an entirely defensive course, and since I now have to look beyond Microsoft for CPD, maybe I’ll try something offensive.


Forgotten Features

The ancient wizards who defined the ASCII standard knew what they were doing. ASCII, for those who have not come across it, is the standard means of encoding mainly textual data as a stream of 7- or 8-bit bytes¹ for transmission or storage; it’s how text is represented inside your computer. So the character A has value 65, which is 01000001 in binary, on the wire or on the disk. Codes 32-126 are printable characters like the ones you are reading. Codes 0-31 (and 127) are special: they are called control codes, and one use for them is for devices to communicate instructions, so a device reading code 10 knows to advance to a new line, and code 8 means move one space backwards (this is how your ↵ and ← keys work, under the covers). Some of them don’t really make sense in the modern context: carriage return (code 13) in its original usage would cause the receiving printer to physically move the print head back to the left, and line feed would physically advance the fanfold paper to the next line. Nevertheless the codes are still there, and modern operating systems know how to interpret them for modern devices.
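To make that concrete, here is a quick illustration in Python (nothing about the mapping is Python-specific):

    # The character <-> code point mapping described above.
    print(ord("A"))                  # 65
    print(format(ord("A"), "08b"))   # 01000001
    print(chr(65))                   # A

    # Control codes use the same mechanism: 10 is line feed, 8 is backspace.
    print(repr(chr(10)))             # '\n'
    print(repr(chr(8)))              # '\x08'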

ASCII control codes also include values for structuring ASCII files or streams of bytes, for example to represent tabular data. The important thing about them is that they are outside the range of values that they delineate, so their meaning is always unambiguous. Code 28 is file separator, 29 group separator, 30 record separator and 31 unit separator. So it is easy to encode one or more tables of data in one chunk of ASCII.
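As a minimal sketch of the idea (the table contents are invented, and I only use two of the four separators):

    # The ASCII separator control codes are out-of-band, so they can
    # never collide with the printable data they delimit.
    FS, GS, RS, US = "\x1c", "\x1d", "\x1e", "\x1f"   # codes 28-31

    table = [["id", "name"], ["1", "Ada"], ["2", "Grace"]]

    # Fields (units) joined by US, rows (records) by RS.
    encoded = RS.join(US.join(row) for row in table)

    # Decoding is just two splits; no quoting or escaping rules needed.
    decoded = [record.split(US) for record in encoded.split(RS)]
    assert decoded == table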

Or at least it should be easy, but, as the Python csv module documentation puts it: “There is no ‘CSV standard’, so the format is operationally defined by the many applications which read and write it. The lack of a standard means that subtle differences often exist in the data produced and consumed by different applications. These differences can make it annoying to process CSV files from multiple sources.”

Everyone seems to have entirely forgotten that these exist! There is nothing weird or exotic about them: the first edition of the standard was published in 1963, and ASCII has been baked into nearly every computer ever since! Every time I need to deal with CSV files², a format which is full of edge cases that no one can agree on, I despair a little at the state of the profession and our claims of being software engineers. And that’s before we even get onto more recent wheel-reinventing like XML, JSON, YAML… For a start, anything that uses normal printable characters³ to delineate records or otherwise impose structure becomes unwieldy as soon as you want to have one of those characters in the data. Everyone who has had to deal with angle brackets or ampersands in HTML has been bitten by this at one point! I suppose one point in CSV’s favour is that at least each delimiter is only one byte, unlike the others, which have a great deal of overhead.
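To see the collision problem in action with Python’s csv module (the row contents are made up):

    import csv, io

    # A comma inside a field forces quoting, and a quote inside a
    # quoted field must then be escaped by doubling it.
    row = ['Smith, John', 'said "hello"']

    buf = io.StringIO()
    csv.writer(buf).writerow(row)
    print(buf.getvalue())    # "Smith, John","said ""hello"""

    # With the out-of-band unit separator there is nothing to escape.
    print("\x1f".join(row))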

¹ The extra bit could be used to get another 128 chars, or for error detection (a parity bit).
² Many times per day
³ There’s no reason a text editor couldn’t display something for the control codes, when it can easily show paragraph breaks as ¶ or whitespace as ◊.


Learning a New Language

Generally, every program I write, regardless of what useful thing it actually does and regardless of what programming language it is written in, has to do certain things, which usually include the following (there is a minimal sketch after the list):

  • Importing a library and calling functions contained within that library
  • Handling datatypes such as converting between strings and integers, and knowing when this is implicit or explicit, how dates and times work, and so on
  • Getting command line parameters or parsing a configuration file
  • Writing log messages such as to files or the system log
  • Handling errors and exceptions
  • Connecting to services such as a database, a REST API, a message bus etc
  • Reading and writing files to the disk, or to blob storage or whatever it’s called this week
  • Spawning threads and processes and communicating between them
  • Building a package whether that’s a self-contained binary, an RPM, an OCI container, whatever is native to that language and the platform
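Here is the sketch promised above, in Python since that is what I reach for first. The config filename and messages are placeholders rather than anything real, and it only touches a few of the bullets (parameters, configuration, logging, errors, file I/O):

    import argparse   # command line parameters
    import json       # configuration file
    import logging    # log messages
    import sys

    def main() -> int:
        parser = argparse.ArgumentParser(description="skeleton program")
        parser.add_argument("--config", default="config.json")
        parser.add_argument("--retries", type=int, default=3)  # explicit str -> int
        args = parser.parse_args()

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger(__name__)

        try:
            with open(args.config) as f:   # reading a file from disk
                config = json.load(f)
        except (OSError, json.JSONDecodeError) as exc:   # errors and exceptions
            log.error("could not load config: %s", exc)
            return 1

        log.info("loaded %d settings, retries=%d", len(config), args.retries)
        return 0

    if __name__ == "__main__":
        sys.exit(main())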

It’s easy to find examples of most of these things using resources such as Rosetta Code, and my first real program will be a horrific cut-and-paste mess, but it will get me started. I’ll soon refine it, absorb the idiomatic patterns of the language, and before long be writing fluently in it and knowing my way around the ecosystem: what libraries are available, and what the strengths and weaknesses of the language, the libraries and the community are. Once you have done this a few times it becomes easy, and you can stop worrying so much about being a “language X programmer” and concentrate on the important stuff, which is the problem domain you are working in.


ML in the Real World

About a decade ago now, I was doing a lot of what we would now call ML†, using what is now called the data exhaust‡ from the production infrastructure of an exchange, both the OLTP and DW sides. It was simple timeseries stuff, just lots of it. I could look at the storage arrays, say, and make very accurate predictions, very far in advance, about when some threshold would be breached. I could get from the ticketing system when a purchase order for more capacity was raised and when it was fitted, and say exactly when to place the order with the vendor to get the parts delivered on time. Same with the time taken to fetch a tape from offsite. I looked at batch job completion time vs CPUs: not only did I know well in advance when we would need more, but my algos had worked out for themselves that there were periodic spikes, such as end-of-month reporting, and knew that there was no need to alert on them. All sorts of stuff like this; I thought it was pretty clever and was quite pleased with myself.
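As a toy reconstruction of the storage prediction, with invented numbers and NumPy’s polyfit standing in for whatever I actually ran at the time:

    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(90)                       # daily capacity samples
    used_tb = 40 + 0.15 * days + rng.normal(0, 0.5, days.size)   # fake telemetry

    # Fit a linear trend, then solve for the day the threshold is breached.
    slope, intercept = np.polyfit(days, used_tb, 1)
    threshold_tb = 70.0
    breach_day = (threshold_tb - intercept) / slope

    print(f"~{slope:.2f} TB/day; {threshold_tb} TB crossed around day {breach_day:.0f}")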

In practice tho’, no-one cared. We went on ordering more disks and shelves when the dumb Nagios alert fired; so long as capacity could be added before there was an actual, during-trading-hours outage, that was good enough, so why change? We added more CPUs when the moaning of the analysts reached the ears of the CEO and he in turn moaned to our boss about it; there was no formal SLA on job completions. And everyone who had been there 6 months or more knew which alerts to ignore and simply did that; no-one even bothered to black them out (partly because there was nothing that could be done about them anyway).

I had a lot of fun doing all this, and I learnt a lot; this was the time I got seriously into Python, NumPy, Matplotlib and so on, skills that have served me well ever since, and I applied linear regression, PCA and various other techniques to real data. But the real lesson is: if you’re going to try to use ML in the real world, you have to use it to solve a problem that you actually have, and generally, existing problems already have a solution that is good enough that ML doesn’t tell anyone anything they hadn’t already figured out themselves or that wasn’t already embedded in institutional knowledge. Maybe if we hadn’t already had industry-leading uptime and transaction volumes on human intuition alone, it would have been taken more seriously. I think many if not most ML practitioners are going to run into this scenario at some point, and need to have a story ready, which I didn’t.

† It was just called applied or predictive statistics back then
‡ It was just called metrics back then, or logging; gotta keep up with the buzzwords!


Article 13

A lot of fuss is being made about the potential impact of so-called Article 13 on YouTube. I think there are two possible outcomes that would be acceptable to me. Either:

  • YouTube verifies the identities of all uploaders, and the uploader is fully responsible for the legality of all new content, relegating YouTube to the status of a common carrier. After a brief grace period, any pre-existing content that isn’t claimed by a validated user is purged.
  • YouTube moderates† every upload before it is publicly visible and is responsible itself if it re-publishes anything that is illegal, and of course for any and all pre-existing content that it continues to publish without having retro-moderated.

Their current position, which seems to be that they can’t make as much profit as they’d like with the overhead of complying with the law, isn’t really justifiable IMHO.

† This could be done with AI, if they are willing to pay the fine/compensation to the real copyright holder when it gets things wrong.


Advice for Students

I went to UCL to study a 4-year programme in Mechanical Engineering. I wanted to work with big gas and steam turbines, for propulsion or bulk power generation. While there I realised that control systems were very interesting as well. UCL has (or at least had) a policy of housing every Fresher, and as many Finalists as it could fit into the remaining space. In my first year I was in Halls, then in rented accommodation with friends from my course for the second and third years, then for the fourth and final year we all applied to go back into Halls. Everyone was accepted… apart from me. I don’t blame UCL, obviously; it was just a lottery.

But that was a turning point in my life, the point at which I drifted away both from academia and from that branch of engineering. In retrospect I guess I could have found some people who were in the same boat and rented a house with them, kept the immersion in college life and finished the year, but it was too easy not to. I actively avoid Java now, but in the mid-to-late 90s it was both cool and hot; as an early adopter I could easily get work much, much better paid than anything the big engineering companies offered their graduate trainees, and I would get to stay in London, which I thought at the time was very important. I kept working like I had been over the summer, living off-campus, thinking I could find a way to make it all work, but I couldn’t, and before I even realised it the year was over: I had missed too many lectures and not even made a start on my dissertation. I graduated with a BEng instead of an MEng.

It would be a stretch to say I regretted any of this; some of the friends I made working for a startup that year are still close friends today, for example, and I have built a solid career in the software engineering field. But at the same time I am conscious of the lost opportunity; London and all it offered would always have been there, whereas when the final term of the final year ends, that chapter is over forever. And maybe if I had stayed in that field I would be working at SpaceX or something now! So if I have any advice for students starting this year it’s to make the most of your time as an undergrad because it will be over in the blink of an eye. But also if an opportunity is there, take it!



Microsoft Professional Program Artificial Intelligence

Building on the momentum† of completing the Data Science track of the Microsoft Professional Program, and inspired by the amazing season 2 of Westworld, I have now also completed the Artificial Intelligence track, Microsoft’s internal AI course just opened to the public. This combines theory with Python programming (no R option this time, sadly) for deep learning (DL) and reinforcement learning (RL), leading up to a Capstone project, which I completed with Keras and CNTK, scoring 100% this time. Of the 4 available optional courses, I chose Natural Language Processing. The track also includes a course on the ethical implications of AI/machine learning/data science, something that should be mandatory for the employees of certain companies…


I had had some exposure to neural nets earlier, but this was my first encounter with RL, and that was easily my favourite and the most rewarding part, and definitely something I want to explore further with tools like OpenAI Gym. A fair amount of independent reading is needed to answer the assessment questions in this and the other more advanced courses; obviously I was not looking to be spoon-fed, but it would have been better for the material to be self-contained. Rumsfeld’s Theory applies here: if you don’t know what you don’t know, how can you assess the validity or currency of an external source? Such as what has changed in Sutton & Barto between the 1st edition (1998) and the 2nd (October 2018, so not actually published yet!), and which one the person who set the assessment questions was reading? Or the latest edition of Jurafsky & Martin? Many students raised this concern in the forum and the edX proctor said they were taking the feedback on board, so perhaps by the time any readers of this blog come to it, it will be improved. The NLP course was particularly bad for this; I wonder if something was missed when MS reworked these courses for an external audience? So frustrating when it is such an interesting subject!

Obviously there is not the depth of theory in these relatively short courses to do academic research in the field of AI. Each of the later courses (7-9) takes a few weeks, but to go fully in depth would take a year or more. There is certainly enough, though, to understand how the relevant maths corresponds to and interacts with the moving parts, to confidently identify situations or problems DL and RL could be applied to, and to subsequently implement and operationalize a solution with open source tooling, Azure, or both. Overall I am pretty happy with the experience. I learnt an awful lot, I have plenty of avenues to go on exploring in addition to the RL mentioned previously, and I have picked up both a long-term foundation and some skills that are immediately useful in the short term. Understanding the maths is so important for developing intuition, and is an investment that will continue to pay off even as the technologies change. Having worked on this part time over several months, I am very conscious that a lot of this stuff is quite “use it or lose it”, so I will need to maintain the momentum and internalize it all properly. For my next course I think I’ll do Neuronal Dynamics, or maybe something purely practical.

Oh, and I previously mentioned that I had finally upgraded my late-2008 MacBook Pro to a Surface Laptop. The lack of a discrete GPU‡ on this particular model means that the final computation for the Capstone took about an hour to complete… On an NC6 instance in Azure I am seeing speedups of 4-10× on the K80, which is actually less than I had expected, but still pretty good, and I expect the gap would open up with larger datasets. I think I will stick with renting a GPU instance for now, until my Azure bill indicates it’s time to invest in a desktop PC with a 1080; I’m just not sure a GPU makes sense in a laptop. Extensive use is made in these courses of Jupyter Notebook, which when running locally is pretty clunky compared to the MathCAD I remember using as a Mech Eng undergrad in the ’90s, but there is no denying that Azure Notebooks is very convenient, and it’s free!


It begins with the birth of a new people, and the choices they’ll have to make and the people they will decide to become.

Did I mention that I am obsessed with Westworld?


† A 3-course overlap/headstart!

‡ PlaidML is nearly 2× as fast as CNTK on the same processor with integrated GPU, but with less accuracy in my experiments, so you need more epochs anyway; it depends where the lines cross for your specific hardware and workload.
