# Mission Impossible

Published 2022-08-07

Sometimes you may feel like this when tackling an issue that just cannot be solved.

Well, I figured this would be a good section of the website to briefly mention issues or projects I have been involved in, where either the problem came to me because no-one else had been able to solve it, or I developed a way to streamline the process: automating the job outright, or putting a bespoke application in place that let the user solve the problem themselves. I'm not saying I provided the best solution, but I was able to provide a solution that fixed the given problem when no-one else seemed able to.

# I've been doing IT a long time 🕹️

So many years have passed since I started in IT that I feel I should have been able to retire by now. Sadly not, as I haven't even paid into a pension; I was just hoping to get super-rich from doing IT, but in hindsight being a lorry driver would have earned me much more money with a lot less stress. However, after investing just over half my life in the IT industry, I don't want to give up on it just yet.

My start in IT was a Government apprenticeship scheme, where I got paid a whole £50 a week to learn IT. After a few months you got a work placement, where sadly you still only got paid £50 a week, and I had to spend a large proportion of that just travelling to work. Essentially I was doing charity work, earning next to nothing for working 9 to 5, five days a week, so I got out of the apprenticeship scheme as it felt more like slave labour. When I started out, Windows NT was the server OS of the day, with Windows 2000 clients, or Windows 98 running some weird DOS programs to manage the business. Oh how times have changed. The first IT job I got after leaving the apprenticeship was for a plumbing company, where I sat upstairs in a small office with an even smaller window that didn't open, and I was allowed to smoke in that office while working alongside a non-smoker. Wow, times really have changed.

In my early 20s I was doing a lot of work in London, going to banks to install and configure Remedy. The company I was working for was charging the client £800 a day for my services, yet I was only taking home £880 a month! I used to spend at least half of every month working in London, on some really long 14+ hour days. I never really thought about it at the time, but that company was making my yearly salary back in just under half a month. I moved on to helpdesk work, then second line support, and then third line support.

Although IT support has helped me pay the bills over the years, I don't really feel like I belong there any more, as it seems every other support person out there is still pointing and clicking their way through the solution. So much time is wasted pointing and clicking; if you actually dedicate some time to the real problem, you can put a proper solution in place. Then again, that might end up putting a lot of IT support people out of business, if computers had decent scripts running on them that automatically fixed problems. I am a very strong believer in letting the computer do the hard work. Not because I am lazy, but because computers were built and designed to process tasks far more quickly and efficiently than any human being could, and they remove the human-error element from the situation as well.

Thankfully I have had some opportunities to produce my own solutions to the problems that landed my way, so I thought this should get a little blog section of its own, to hopefully demonstrate I can do more than just create quirky modules. When I am working for a company I don't post any scripts I write in work time to my own GitHub account. I have regretted this at times, especially when you know you have already written a script to fix that exact problem, but it's at your last job. Then again, I think re-writing similar scripts makes you a better scripter, as you can always improve on what you wrote before: simple things like error handling, logging the script's output, emailing the outcome to yourself, and plenty more.

# Daily Tasks 📆

I have done this in several jobs now: on a daily basis you need to check X, Y and Z to make sure everything is running smoothly on the network, or whatever checks you need to do. Most recently the daily checks task was pretty big, big enough that the company had employed one person solely to do it. This seemed a bit crazy to me, as after a few simple internet searches I saw that the various web-based applications you had to log into all had APIs or PowerShell modules I could use to interact with them. So instead of logging into 6 or 7 different applications, clicking here and clicking there to obtain the information, I wrote a function. The function collected all the required information, placed it into various HTML tables, and emailed it to me each day. This meant that before I even made it into the office, I had an email containing everything the person doing the daily checks would need. I could easily view the uptime of over 100 servers, backup information on numerous servers, storage information, hardware health, and lots more. Before this HTML email report, I had also designed a full-on IT dashboard monitoring various aspects of the network and user consumption of server processes, with automatic security warnings for accounts expiring or not logging in. This is by no means one of my greatest achievements, but the fact that it had never been done before, and that it basically automated one person's job, makes it a mission impossible to me: I guess no-one thought it was possible, hence doing it manually.
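
To give a flavour of the approach, here is a minimal sketch of just the uptime check from that report, assuming made-up server names and SMTP details; the real function pulled from 6 or 7 different sources into separate HTML tables.

```powershell
# Minimal sketch: gather server uptime, render it as an HTML table and email it.
# Server names, addresses and the SMTP server below are placeholders.
$servers = 'SRV01', 'SRV02', 'SRV03'

$uptime = foreach ($server in $servers) {
    $os = Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $server
    [pscustomobject]@{
        Server     = $server
        LastBoot   = $os.LastBootUpTime
        UptimeDays = [math]::Round(((Get-Date) - $os.LastBootUpTime).TotalDays, 1)
    }
}

# Convert the results to an HTML table and send the daily report.
$body = $uptime | ConvertTo-Html -Title 'Daily Checks' -PreContent '<h2>Server Uptime</h2>' | Out-String

Send-MailMessage -To 'me@example.com' -From 'checks@example.com' `
    -Subject "Daily checks $(Get-Date -Format 'yyyy-MM-dd')" `
    -Body $body -BodyAsHtml -SmtpServer 'smtp.example.com'
```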

# Netscaler ⚖️

Again, the company I was at was using NetScaler and knew it had API capabilities, but didn't know how to use them. This was quite mad, as we are talking about a big company with lots of employees and lots of departments, with people paid to develop solutions, yet here I was at 3rd line being asked to automate several NetScaler tasks, mainly relating to load balancing. This allowed me to write a script to reboot 3 critical servers that were constantly having issues, to clear some of the errors happening on a daily basis. This used to be done by the on-call person at silly-o'clock in the morning or evening, which meant the on-call person was also being paid for that time. Using the NetScaler API to reboot these servers daily got rid of the constant out-of-hours calls, which in turn saved the company money by not constantly paying the on-call person overtime. As well as dealing with these specific problematic servers, I also created a bespoke GUI application to allow the developers to upgrade their own websites, something that usually consumed an hour of my day load balancing the web servers so they could do the upgrade. I made the GUI super simple to use, built in a log recording who used it and when (in case something went wrong, or for auditing purposes), and added validation in the background verifying the information was correct before letting the user continue, preventing mistakes. This meant the developers could do their own upgrades instead of me dedicating time to something they could easily do themselves, given an application that only allowed them to do what they needed. Again: saving time, money and resources.
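
As a rough illustration (not the production script), this is the shape of driving the NetScaler NITRO REST API from PowerShell to take a load-balanced service out of rotation before a reboot and put it back afterwards. The address, credentials, service and server names are placeholders, and the exact payloads should be checked against the NITRO documentation for your firmware version.

```powershell
$ns      = 'https://netscaler.example.com'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'password' }

# Gracefully disable the service so existing connections can drain.
Invoke-RestMethod -Method Post -Uri "$ns/nitro/v1/config/service?action=disable" `
    -Headers $headers -ContentType 'application/json' `
    -Body (@{ service = @{ name = 'svc-web01'; graceful = 'YES'; delay = 30 } } | ConvertTo-Json)

# Reboot the backend server and wait until PowerShell remoting responds again.
Restart-Computer -ComputerName 'WEB01' -Wait -For PowerShell -Force

# Bring the service back into the load-balancing pool.
Invoke-RestMethod -Method Post -Uri "$ns/nitro/v1/config/service?action=enable" `
    -Headers $headers -ContentType 'application/json' `
    -Body (@{ service = @{ name = 'svc-web01' } } | ConvertTo-Json)
```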

# Server Issue 😲

I was speaking to a technical architect about a server issue. He informed me that to fix it you needed to restart a particular service, not once but twice. The problem was they didn't know when to restart it, as the service never stopped running, but the service the server provided became unresponsive to the end user. The software supplier had confirmed there was a bug in the release we were using, and their advice was basically to restart the service at a random time in the day and hope that fixed it. This was the lamest fix I had heard in quite some time. The crazy thing in my head was that I was speaking to someone more senior than me, and they didn't seem to have a scooby-doo on how to fix it. Looking in the Application event log (using PowerShell) I could see a specific error code occurring every time the service needed restarting twice. Now I had the information I needed, I wrote a script in about 15 minutes to run every few minutes on the server: check the Application log, check whether this particular error had occurred, and if it had, restart the particular service twice. Even more surprising was having to explain to a developer how I had built a workaround for the software bug until the supplier could provide a fix. So again, by no means my best mission impossible, but you had senior architects, developers and an entire software company who could not fix their own software issue, which was happening numerous times a day and at any time of day. By implementing this script and recording its activity in the event log, I could tell the developer exactly how many times my script had saved the day and stopped the on-call person being called at silly-o'clock. Again, saving time and money. It's just a bit mad that I came up with the solution in about 15 minutes, when all these more senior people couldn't provide any solution at all.
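
A minimal sketch of that watchdog, with a made-up event ID and service name standing in for the real ones:

```powershell
$eventId = 1309            # placeholder for the error code the vendor's bug produced
$service = 'AppService'    # placeholder service name

# Look for the tell-tale error in the Application log within the last few minutes.
$recent = Get-WinEvent -FilterHashtable @{
    LogName   = 'Application'
    Id        = $eventId
    StartTime = (Get-Date).AddMinutes(-5)
} -ErrorAction SilentlyContinue

if ($recent) {
    # The workaround required two consecutive restarts to clear the fault.
    1..2 | ForEach-Object { Restart-Service -Name $service -Force }

    # Record the intervention so the number of saves can be counted later.
    if (-not [System.Diagnostics.EventLog]::SourceExists('ServiceWatchdog')) {
        New-EventLog -LogName Application -Source 'ServiceWatchdog'
    }
    Write-EventLog -LogName Application -Source 'ServiceWatchdog' -EventId 9000 `
        -EntryType Information -Message "Restarted $service twice after detecting event $eventId."
}
```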

# Citrix 📀

I'm not claiming to be a Citrix master by any means, but when one colleague asks another to literally point and click through over a hundred machines to put them into maintenance mode, that just screams automation to me. Although this task didn't directly fall to me, the craziness of someone asking someone else to help them point and click through a load of servers seemed so prehistoric. I was able to smash a script together and then turn it into a function, so you could query one or more ESXi boxes and put all the virtual servers on that host into maintenance mode, allowing the box to be taken down without any end users suffering loss of data. I made another function to take all the virtual machines out of maintenance mode again. To me this is just how hours and hours of IT time gets wasted: pointing and clicking. To make it worse, it normally took two people to complete this task, due to the sheer amount of pointing and clicking. Instead I supplied a couple of PowerShell functions that completed the task in a few mere seconds instead of up to an hour.
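
Something along these lines, using the Citrix Broker PowerShell SDK; filtering on `HostingServerName` is my assumption about how the machines mapped to each ESXi host in that environment.

```powershell
# Requires the Citrix Broker snap-in from the Delivery Controller.
Add-PSSnapin Citrix.Broker.Admin.V2

function Enable-MaintenanceByHost {
    param([Parameter(Mandatory)][string[]]$HostName)
    foreach ($esx in $HostName) {
        # Put every machine hosted on this ESXi box into maintenance mode.
        Get-BrokerMachine -HostingServerName $esx |
            Set-BrokerMachine -InMaintenanceMode $true
    }
}

function Disable-MaintenanceByHost {
    param([Parameter(Mandatory)][string[]]$HostName)
    foreach ($esx in $HostName) {
        # Bring every machine on this host back into service.
        Get-BrokerMachine -HostingServerName $esx |
            Set-BrokerMachine -InMaintenanceMode $false
    }
}

# Usage: Enable-MaintenanceByHost -HostName 'esxi01.example.com'
```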

# SQL DBA ♟️

Dealing with databases requires a lot of attention to detail. I guess that's why some people think it is better done manually. Not me: I think if you write a good script properly once, and make it into a function, you can use it on every SQL server going forward. This particular task was making sure certain jobs were set up and configured on each SQL server. As this was repeating the same process on each SQL box, to me it needed automating, otherwise it was going to take a very long time given the list of servers it needed applying to. Thankfully PowerShell has the dbatools module, which lets you automate pretty much anything you would otherwise do in SSMS. Whilst deploying the function to the various SQL servers I also discovered a server in urgent need of TLC. This was mission impossible to me because no-one else would have been able to script the task, as the instructions were all manual. Another big win, as it saved me so much time repeating the same process over and over again, without making human mistakes.
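
A minimal sketch of the idea with dbatools, using placeholder server and job names: ensure a standard set of agent jobs exists everywhere, copying any missing ones from a reference instance.

```powershell
# Requires: Install-Module dbatools
$reference = 'SQLREF01'
$targets   = 'SQL01', 'SQL02', 'SQL03'
$jobs      = 'Nightly - Index Maintenance', 'Nightly - DBCC CHECKDB'

foreach ($instance in $targets) {
    # Find which of the standard jobs are not yet on this instance.
    $existing = (Get-DbaAgentJob -SqlInstance $instance).Name
    $missing  = $jobs | Where-Object { $_ -notin $existing }

    if ($missing) {
        # Copy only the missing jobs across from the reference server.
        Copy-DbaAgentJob -Source $reference -Destination $instance -Job $missing
    }
}
```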

# VMWare 💿

I remember way back when I was working at the council how virtual machines were going to be the future of IT. Sure enough, this one company I was at had thousands and thousands of virtual machines. To me this meant using PowerShell instead of vSphere, as it would be a lot quicker: no pointing and clicking to find the VM. Using the VMware PowerCLI modules I was able to script several documented processes, such as increasing the disk size of a VM. Each task I automated meant I no longer had to do it manually, which meant time and money saved. I ended up writing several VMware scripts to automate the most frequent problems that occurred. No-one had asked me to do this, but I was dealing with tasks like high CPU on numerous virtual machines throughout the day. Instead of manually going through the process of logging onto vSphere, searching for the machine, logging on remotely and then looking at the problem, by the time you had done all that I had already invoked my function against the machine from my PowerShell session and fixed the problem.
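
For example, growing a VM's disk with PowerCLI looks roughly like this; names and sizes are placeholders, and the real scripts wrapped this in functions with validation before touching anything.

```powershell
# Requires: Install-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com'

$vm   = Get-VM -Name 'APPSRV01'
$disk = Get-HardDisk -VM $vm | Select-Object -First 1

# Only ever grow the disk; shrinking is not supported this way.
if ($disk.CapacityGB -lt 100) {
    Set-HardDisk -HardDisk $disk -CapacityGB 100 -Confirm:$false
}
```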

# IIS 🕸️

Although I am not a dedicated web administrator, I do have a good understanding of IIS and how to fix various issues that may occur, especially using a few PowerShell modules to automate the task. The IISAdministration module is good for identifying the sites and application pools remotely, but sometimes the information you need is in the actual IIS log file for that website. This led me to using a Microsoft tool to parse the log files to find the issue, which got me thinking: why had this not been done in PowerShell? Not long after, a binary module was published for it, which got me into binary modules. I took this a step further by placing all the data into Excel and making pivot-table reports. I was even more shocked when the developer said they never checked the logs; from the information I had extracted he could immediately tell why those errors were happening, so it was of great use to the developer in the end. IIS is great at logging events from the websites it serves, but who is monitoring the logs? I found this was the case on a lot of web servers: the IIS logs were not being cleaned up. On one live web server this nearly flooded the main operating system drive. This led to me writing my own function and deploying it through SolarWinds, so something was in place to monitor the log files and make sure no web server had its disk drives flooded by log files.
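
A simplified version of that cleanup function might look like this; the 30-day retention window is a placeholder, and the real one was packaged for deployment through SolarWinds.

```powershell
Import-Module IISAdministration

function Clear-IisLogs {
    param([int]$RetentionDays = 30)
    $cutoff = (Get-Date).AddDays(-$RetentionDays)

    foreach ($site in Get-IISSite) {
        # Expand environment variables such as %SystemDrive% in the configured path.
        $logDir = [Environment]::ExpandEnvironmentVariables($site.LogFile.Directory)

        # Delete any log files older than the retention window.
        Get-ChildItem -Path $logDir -Filter '*.log' -Recurse -ErrorAction SilentlyContinue |
            Where-Object LastWriteTime -lt $cutoff |
            Remove-Item -Force
    }
}
```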

# Active Directory 🧑‍🤝‍🧑

Having used Active Directory since before I could script has shown me how much time I could have saved. I am more than capable of writing scripts to onboard users from a spreadsheet, or more complex scripts to find the differences in security groups between specific users, as well as updating advanced settings on user accounts, managing accounts from a security perspective, and finding rogue accounts that should have been deleted or disabled. Using PowerShell to accomplish these tasks means I get the job done much quicker, and making my scripts into advanced functions makes them far more versatile to adapt or re-use. At one employer with over 6,000 accounts, I was able to identify over 400 live accounts that had not been logged onto in over a year. It wasn't my specific job to find this information, but it proved this was not being monitored, and it could have led to a massive data breach. I reported my findings to the security team, who I also helped when there was a compromised email account: I was the only person to spot that Outlook Web Access had not been disabled and that this particular user had 8 mobile devices registered to their account. By doing those two things, no further emails were sent from the compromised account. Even though there was a dedicated security team, and I had only heard about the compromised account by chance, I was the person who fixed the problem. I have also designed my own Active Directory dashboards to monitor various aspects of user accounts, reporting them using various components on the dashboard page.
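
The stale-account search itself can be sketched in a few lines with the ActiveDirectory module; the one-year window matches the finding above, and the CSV path is a placeholder.

```powershell
Import-Module ActiveDirectory

# Find enabled accounts with no logon in the last 365 days.
$stale = Search-ADAccount -AccountInactive -TimeSpan 365.00:00:00 -UsersOnly |
    Where-Object { $_.Enabled }

$stale | Select-Object Name, SamAccountName, LastLogonDate |
    Sort-Object LastLogonDate |
    Export-Csv -Path .\StaleAccounts.csv -NoTypeInformation
```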

# Office 365 🏢

Even though Office 365 seems to be the norm for most companies these days, it is amazing again when it comes to administration: no-one seems to know how to use PowerShell to manage their Office 365. Why is this important? Well, it has happened to me on more than one occasion that Microsoft services were unavailable and you could not use the management portal to administer Office 365 by pointing and clicking in a web page. Managing Office 365 via PowerShell became extremely important when I had to deal with lots and lots of email accounts that had not migrated correctly. When the senior architect showed me how he was fixing these issues, the demo lasted just over an hour, and four hours later he had actually migrated the mailbox. I even remember my boss at the time writing to the head of IT explaining that the team I was working in did not have enough time to solve all these mailbox migration issues, as it took four hours or more to fix even one mailbox manually the way we were shown. As this task fell to me, there was no way I had the time to fix each of these mailboxes manually, so I spent four hours writing numerous functions to fix each given mailbox problem. I was then able to group the problems and apply the functions to fix each group of issues. Within a day I had completely cleared the list, which I had expected to take weeks or more had I tackled it manually. I documented the whole procedure and uploaded the scripts to the Git server, which meant anyone on the team could follow a set process taking a fraction of the time the senior technical architect's original method took. This was a massive win for me, as I had also been informed there were 2,000 more mailboxes to migrate; had they carried on fixing those issues manually, it would probably have become a full-time job for someone else to deal with solely. Because I had written advanced functions, I could pipe in as many mailboxes as I needed to fix. Saving lots of time and money. There was also a major security breach where a particular user had their two-factor authentication compromised, and rogue emails were being sent from that end user to lots of people within the organisation.
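
A rough sketch of the grouping idea, using the standard Exchange Online move-request cmdlets. The real fix functions handled each failure category differently; this only shows the shape, and resuming every group as below is an assumption for illustration.

```powershell
# Requires: Install-Module ExchangeOnlineManagement
Connect-ExchangeOnline

# Collect each failed migration together with its failure type.
$failed = foreach ($mr in Get-MoveRequest -MoveStatus Failed) {
    $stats = Get-MoveRequestStatistics -Identity $mr.Identity
    [pscustomobject]@{ Identity = $mr.Identity; FailureType = $stats.FailureType }
}

# Group the broken mailboxes by failure type and retry each group in one go.
$failed | Group-Object FailureType | ForEach-Object {
    Write-Host "$($_.Count) mailboxes failed with: $($_.Name)"
    $_.Group | ForEach-Object { Resume-MoveRequest -Identity $_.Identity }
}
```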

# Network slowness 🐢

This particular problem had originally been logged about 2 years before I even joined the company. At least I knew the problem was not my fault, but that wasn't an answer to it. I actually solved this one with Wireshark, which was partly my motivation for making the TShark Crescendo PowerShell module. I did a fair amount of video training for Wireshark to feel fully comfortable running and analysing a trace, and although I had been told that Wireshark traces had been done before and nothing was found, this did not deter me. Before Wireshark I actually used Sysinternals Process Monitor to try to find the culprit. I was able to obtain useful information from the trace file and could see delays, and I knew this was a remote site accessing the data about 8 hops away, but as I couldn't find the definitive problem in the Process Monitor trace I turned to Wireshark. The information Wireshark gave me was granular enough to show exactly how the application was interacting with the network, and I was able to find the delay in the program and what was causing it. Sadly the backend tech for this rather expensive equipment was over 20 years old, running a single-threaded application. There was also a known SMB issue with the network card in use, so I was able to swap that for a USB-C external network card. Although this did not completely solve the problem, I was able to pinpoint where the latency was happening: reads of the same file on a server 8 hops away were taking up to 5 seconds over SMB, which caused the spinning circle and the "not responding" on the main application. The backend solution was designed to be hosted locally, which it was not, and all the reading and writing of that file over SMB was causing the severe delays; to properly fix the latency, the backend needed to be hosted locally.
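
As an illustration, this is the kind of tshark query (run from PowerShell, the same idea the TShark Crescendo module wraps) that surfaces slow SMB operations; the capture file name and the one-second threshold are placeholders.

```powershell
# List SMB2 operations in the capture that took more than a second to get a response.
# smb2.time is Wireshark's "time from request" field for SMB2.
tshark -r .\capture.pcapng -Y 'smb2.time > 1' `
    -T fields -e frame.number -e ip.src -e ip.dst -e smb2.cmd -e smb2.time
```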

# Intune 🎛️

I had never used Intune before, but as I understood it, it essentially just ran PowerShell scripts on end-client computers. This particular problem had been open for months and months, as no-one at the company knew how to script something in Intune to deploy a specific printer to specific people. I had never deployed a printer automatically using Intune, and there were other aspects to take into consideration, like getting the driver installed. After a bit of testing I had packaged a solution and rolled it out, and to my delight the external company could finally use the printer. The same company had also purchased loads of USB adapters to allow multiple monitors. None of these were working either: although they were USB, they were not plug and play, and getting hold of the correct driver was a little tricky. Again using Intune I was able to deploy the driver to all the laptops on-site automatically, so these USB multi-monitor devices would just work for end users. This problem had also been open for a long time without a solution, hence I think it is worthy of the mission impossible list, as no-one else in the company was capable of pulling it off.
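
A simplified sketch of the kind of script Intune ran on each machine: stage the driver, create the TCP/IP port and add the printer. The driver name, INF path and address are placeholders for the real packaged values.

```powershell
$driverInf  = "$PSScriptRoot\driver\printer.inf"   # driver shipped inside the package
$driverName = 'Example Laser PCL6'                 # must match the name inside the INF
$portName   = 'IP_10.0.0.50'

# Stage the driver into the Windows driver store, then register it with the spooler.
pnputil.exe /add-driver $driverInf /install
Add-PrinterDriver -Name $driverName

# Create the TCP/IP printer port if it doesn't already exist.
if (-not (Get-PrinterPort -Name $portName -ErrorAction SilentlyContinue)) {
    Add-PrinterPort -Name $portName -PrinterHostAddress '10.0.0.50'
}

Add-Printer -Name 'Office Laser' -DriverName $driverName -PortName $portName
```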

# Datto 🎚️

One company I was at expected me, as the sole NOC engineer, to go through and fix over 300 calls a day. Yes, a day, as well as recording all my time for each call I closed. It almost seemed I was spending more time recording my time than time I could use to focus on problems. Although this company had Datto, they didn't have anyone capable of writing PowerShell scripts to use within it to fix the problems. Not long into the job I had written numerous PowerShell Datto functions I could deploy to a problem machine to fix the issue automatically. That is what actually gave me the chance to close up to 300 calls a day. This job seemed mission impossible from the get-go, with the sheer volume of calls I was expected to close manually, as well as recording all my time, documenting the scripts I was writing, writing the scripts, and closing the 300+ daily calls. It was only possible because I wrote bespoke scripts for each category of problem I was solving. I was also able to change the automatic fixes they had in place, which were not configured correctly, to run my scripts instead. This dramatically reduced the call queue, to the point where it was almost in a manageable state for one person. Initially I was working overtime for free each evening just to make the volume of calls possible, but once the scripts were in place and the Datto system was being used more effectively, I could stop working overtime. I also provided other Datto solutions, such as one for the Blue Screen of Death, which automatically downloaded and installed the software required to read the dump file and displayed the reason the blue screen happened on that particular machine.
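
A trimmed-down sketch of how such a BSOD component might start: check whether the machine has produced any minidumps and surface the most recent ones in the component output. The actual component went on to fetch the dump-reading tool and extract the cause, which is omitted here.

```powershell
# Default location Windows writes minidumps after a blue screen.
$dumpPath = "$env:SystemRoot\Minidump"

if (Test-Path $dumpPath) {
    # Report the five most recent crash dumps for the technician (or script) to act on.
    Get-ChildItem -Path $dumpPath -Filter '*.dmp' |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 5 Name, LastWriteTime
} else {
    Write-Output 'No minidumps found on this machine.'
}
```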

# Complaint System 😡

At this workplace, one of the directors came up to me and told me the company needed a complaint system in order to gain a certain certification for providing customer care. He literally scribbled me two whole sentences describing this complaint system. Those two sentences were barely readable, and didn't take into account any of the front-end design or, more importantly, the back-end design of how the relational database would be built and the information it would record. One week later I had designed a back-end relational database to store all the data, and created a front-end web dashboard with its own security group in Active Directory, so that logging into the complaint system authenticated you against that AD security group. I also hosted it as an HTTPS site, creating the certificate and rolling it out via GPO. This allowed the call handlers in the company to record any complaint received by phone or email and assign it to the appropriate manager. All this information was recorded in the database, so I was also able to generate various graphs and reports from the data. Whenever a complaint was assigned to a manager for further investigation, an email was sent to notify that manager.
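
A loose sketch of the heart of that back-end, created from PowerShell with Invoke-Sqlcmd; the table and column names are illustrative, and the real design had further tables for categories, outcomes and reporting.

```powershell
# T-SQL for the core complaints table, run against a placeholder instance and database.
$schema = @'
CREATE TABLE dbo.Complaint (
    ComplaintId INT IDENTITY(1,1) PRIMARY KEY,
    ReceivedVia NVARCHAR(20)  NOT NULL,      -- phone or email
    Summary     NVARCHAR(MAX) NOT NULL,
    LoggedBy    NVARCHAR(100) NOT NULL,      -- the authenticated AD user
    AssignedTo  NVARCHAR(100) NULL,          -- manager it was assigned to
    LoggedAt    DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Status      NVARCHAR(20)  NOT NULL DEFAULT 'Open'
);
'@

Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'Complaints' -Query $schema
```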

# Purchase Order System 😊

Purchase orders were normally recorded on paper, which sometimes went missing. To my knowledge no-one was collating this information, so there was no way of knowing which department was spending what, as it was all on bits of paper. Again, from brief discussions with the finance director, I was able to put together a fully relational back-end database, as well as a front-end web-based dashboard application that was again linked to Active Directory, hosted on IIS over HTTPS with the correct certificate issued to end clients. This gave the company a complete overview of all the purchase orders being raised, with no chance of bits of paper going missing. I designed the system with create, read, update and delete in mind, to give users the opportunity to cancel their purchase order should it no longer be required. I was also able to build in complex business rules to decide automatically who a purchase order should go to, based on the information within it.
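
A sketch of the kind of routing rule built into the system: pick an approver automatically from the order's department and value. The thresholds, departments and addresses are invented for illustration.

```powershell
function Get-PoApprover {
    param(
        [Parameter(Mandatory)][string]$Department,
        [Parameter(Mandatory)][decimal]$Amount
    )
    # High-value orders always escalate to the finance director.
    if ($Amount -gt 10000) { return 'finance.director@example.com' }

    # Otherwise route by department, with a sensible default.
    switch ($Department) {
        'IT'         { 'it.manager@example.com' }
        'Operations' { 'ops.manager@example.com' }
        default      { 'office.manager@example.com' }
    }
}

# Usage: Get-PoApprover -Department 'IT' -Amount 2500
```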

# Vehicle Management System 🚚

This particular company had lots and lots of vehicles. All the mileage, repairs and various other bits of data were held in different spreadsheets located in different directory shares, which various people would update as and when. My mission was to get all this information into one place, so you could manage all the vehicle data from one application instead of 20 different spreadsheets. Again I put together a back-end database and a front-end dashboard to display and record all this information. When I completed the project and demonstrated it to the end users, there was big delight on their faces, as it meant no longer having to open numerous spreadsheets to record this information.
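
The one-off consolidation can be sketched as a bulk import, assuming the ImportExcel and dbatools modules; the share path, instance and table names are placeholders.

```powershell
Import-Module ImportExcel, dbatools

# Pull every fleet spreadsheet from the share and load its rows into one staging table.
Get-ChildItem -Path '\\fileserver\fleet' -Filter '*.xlsx' -Recurse | ForEach-Object {
    Import-Excel -Path $_.FullName |
        Write-DbaDbTableData -SqlInstance 'SQL01' -Database 'Fleet' `
            -Table 'dbo.VehicleImport' -AutoCreateTable
}
```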

# VAT 💸

Believe it or not, one project I was assigned was to find the missing VAT the finance system had not claimed. This particular issue had been going on for 5+ years, and all the senior people in the company had looked at it, but no-one had been able to solve it. After much SQL querying I found the missing link. I was able to write a complex T-SQL query and display the data in a table grid in a GUI application I built using PowerShell; the output showed everything that had been claimed and not claimed. I conclusively proved the unclaimed VAT really was unclaimed, using the SQL queries and the previously submitted tax forms. The company was able to claim back just over £100,000 with the information I provided. I was then able to modify the T-SQL query and the GUI so it could be used year on year going forward. This was a massive win for me: when I initially started the project I knew VAT stood for Value Added Tax, and that was about it. To then conclusively prove what had not been claimed, when neither the finance software nor any of the add-ons sold for it could do what I had done, was mission impossible accomplished.
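
A stripped-down sketch of the plumbing between the T-SQL and the grid; the real query was far more involved, and the table, column and server names here are placeholders.

```powershell
# Placeholder reconciliation query: invoices whose VAT was never reclaimed.
$query = @'
SELECT InvoiceId, InvoiceDate, NetAmount, VatAmount, VatClaimed
FROM   dbo.VatReconciliation
WHERE  VatClaimed = 0;
'@

# Run the query and show the results in a sortable, filterable grid.
Invoke-Sqlcmd -ServerInstance 'FINSQL01' -Database 'Finance' -Query $query |
    Out-GridView -Title 'Unclaimed VAT'
```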

More stories for bedtime coming soon.