Security Testing Prevents Data Breaches from Government Data Warehouse


It seems like every day brings headlines about data breaches at companies across industry sectors and at government agencies, many of which are classified as “Unintended Disclosure”. A check on privacyrights.org, filtering on both U.S. government and U.S. healthcare sites on the day I wrote this article, showed that “Unintended Disclosure” (436 breaches totaling 27.7 million records) was not far behind “Hacking or malware” (301 breaches totaling 50 million records). Really? Over a third of the records exposed across these two breach categories (27.7 million of 77.7 million combined) came from something as preventable as carelessness on the programming or implementation side? As a tech writer covering things QA-related, I was intrigued.
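
For the skeptical, here is a quick sanity check of that “over a third” claim, a minimal sketch in Python that uses only the figures quoted above (remember, they are a one-day snapshot and will differ on any later visit):

```python
# Breach-record figures quoted above from privacyrights.org
# (snapshot from the day the article was written).
unintended_records = 27.7e6   # "Unintended Disclosure", 436 breaches
hacking_records = 50.0e6      # "Hacking or malware", 301 breaches

total = unintended_records + hacking_records
share = unintended_records / total

print(f"Unintended disclosure share of records: {share:.1%}")  # ~35.6%
```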

Today I’d like to look at steps to avoid preventable loss. The insurance industry calls this “risk avoidance” and recommends audits to lessen the likelihood of what might otherwise happen. For instance, in a pharma manufacturing tech writing job I once held, I worked on a Current Good Manufacturing Practices manual, which the FDA requires to assure quality standards. But the kind of QA we’re talking about here involves security testing of a data warehouse: the Multidimensional Insurance Data Analytics System (MIDAS). Never heard of it? Well, you probably have heard of healthcare.gov, informally referred to as Obamacare; MIDAS collects that program’s qualification data in a data warehouse format. Please, let’s avoid the political firestorm of opinions, pro and con, about the program and the interests behind it, however tempting a cause for comments it may be. Let’s concern ourselves merely with the fact that apps in use should pass QA standards, especially when they handle personal information that carries privacy concerns. You may recall the public problems the healthcare.gov website had, which called attention (thank you!) to the need for proper software quality assurance and follow-through on its results. Quite a boon to those of us in the QA community; we’ve never felt so needed!

The collected qualification information gives us a good idea of what the data fields are. If you go to the verify-information folder and then the documents-and-deadlines subfolder, you will find a list of personal identification documents of exactly the type that raises privacy and phishing concerns, including: income (tax forms, bank account information, Social Security numbers, income forms, business ledger information, documents related to unearned income, etc.), immigration status, citizenship, home/rental insurance, DMV information, mortgage information, marriage licenses, birth certificates, adoption paperwork, and veteran status. Former Social Security Administration Commissioner Michael Astrue asserts in his Cleveland Times editorial that the fields include:

Details of the audit appear in the public document https://oig.hhs.gov/oas/reports/region6/61400067.pdf, which I will now walk through for you, the reader. It is so readable that I am going to do the unthinkable for a tech writer: present large chunks of the material almost verbatim. Honest, I’m not slacking off as a writer – I just think you’ll appreciate it more this way and see that I am not coloring the prose. The report begins by explaining its noble purpose thusly:

Analytics and database systems that are not secured properly create vulnerabilities that could be exploited by unauthorized individuals to compromise the confidentiality of personally identifiable information (PII) or other sensitive data. Data and systems security is a top oversight priority … MIDAS is a central repository for insurance-related data intended to provide ... metrics to the Department of Health and Human Services for various initiatives mandated by the Patient Protection and Affordable Care Act. The MIDAS collects, generates, and stores a high volume of sensitive consumer information, and it is critical that it be properly secured. Therefore, we performed the audit ... Our objective was to assess whether [they] had implemented information security controls to secure the PII related to the MIDAS and a certain number of its supporting databases.

Okay, so we know why it exists, and we see a secondary mission: ensuring that what exists reaches a specific level of quality. The report goes on to discuss the results of the audit, which took place between August and December of 2014:

Although CMS [Centers for Medicare and Medicaid Services] had implemented controls to secure the MIDAS and consumer PII data in the systems and databases we reviewed, we identified areas for improvement in its information security controls. At the time of our fieldwork, CMS: had not disabled unnecessary generic accounts in its test environment [meaning that users of the system may not be properly identified]; had not encrypted user sessions [meaning that an intercepted user session would be readable]; had not conducted automated vulnerability assessments that simulate known attacks, which would have revealed vulnerabilities (e.g., password weaknesses and misconfigurations) specific to the application or databases that support the MIDAS; and used a shared read-only account for access to the database that contained the PII [again, improper tracking of who accesses what data]. In addition to the information security control vulnerabilities mentioned above, our database vulnerability scans identified 22 high, 62 medium, and 51 low vulnerabilities. We made related recommendations to address the issues we identified.
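
To make the account and session findings concrete, here is a minimal sketch of the kind of automated checks a security assessment might run. Everything here is hypothetical illustration: the account list, the name patterns, and the host are mine, not from the report, which does not describe MIDAS’s actual schema, tooling, or endpoints.

```python
import re
import socket
import ssl

# Hypothetical account inventory, as might be pulled from a database's
# system catalog. Names and structure are illustrative only.
accounts = [
    {"name": "jsmith",   "shared": False},
    {"name": "test01",   "shared": False},   # generic test account
    {"name": "readonly", "shared": True},    # shared read-only account
]

# Name patterns that commonly indicate generic, non-attributable accounts.
GENERIC = re.compile(r"^(test|guest|temp|demo|readonly)\d*$", re.IGNORECASE)

def audit_accounts(accounts):
    """Flag generic and shared accounts, the two account weaknesses
    called out in the audit findings."""
    findings = []
    for acct in accounts:
        if GENERIC.match(acct["name"]):
            findings.append(f"disable generic account: {acct['name']}")
        if acct["shared"]:
            findings.append(f"shared account defeats accountability: {acct['name']}")
    return findings

def session_is_encrypted(host, port=443, timeout=5):
    """Rough proxy for 'user sessions are encrypted': does the host
    complete a TLS handshake with a verifiable certificate?"""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

for finding in audit_accounts(accounts):
    print(finding)
print("TLS on example.org:", session_is_encrypted("example.org"))
```

Real assessments use dedicated scanners rather than hand-rolled scripts, of course; the point is simply that every one of these findings is mechanically checkable, which is exactly why the auditors flagged the absence of automated vulnerability assessments.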

How would these recommendations be received? I am glad to report that there is a happy ending: everything was repaired before the report became public.

CMS worked with [them] during the security testing and within a week of the findings being identified, CMS had addressed all the high vulnerabilities identified. CMS had addressed a majority of the remaining findings within 30 days of identification. All of [the] findings in this report were addressed by February 2015. In addition, all of the recommendations in this report were fully implemented prior to the draft report [stamped May 8, 2015] being issued.

Author Bio:

Scott Andery is an expert marketer and author who specializes in software testing tools and resources.


