The wide range of digital devices and extraction processes [within digital forensics] yields a commensurate potential for recoverable evidence within the criminal justice system. I begin with this expression because, while researching Digital Evidence and Computer Crime, I read that digital evidence "can be used to reconstruct a crime scene or incident, identify suspects, apprehend the guilty, defend the innocent, and understand criminal motivations," which heightened my awareness of the capabilities of forensics. I immersed myself in fictional short stories and novels of investigation, and consistently in TV dramas of criminal justice. However, what I see all too often is a lack of cohesion in the digital forensic world as detectives and computer forensic specialists (CFSs) sort through evidence trying to avoid loopholes.
As a result of my readings, I have arrived at the notion that multiple court trials using multiple approaches to evaluating evidence have the potential to create confusion. The field of digital forensics does not currently have mathematics or statistics to evaluate levels of certainty associated with digital evidence (Casey). In other words, there is no set procedure available across the board for evaluating evidence in the field of digital forensics. There are no generalized approaches or consistent studies that would generate statistics. This creates inconsistencies that result in unreliability, and any evidence perceived as unreliable creates reasonable doubt when placed before a judge and jury.
It is therefore my opinion that the weakness in the digital forensics industry lies in the inability to predict with consistency and, in turn, to produce statistics. To address these lapses, each case should be assessed individually while incorporating a generalized approach. Beyond reasonable doubt is the highest standard of proof that must be met in any trial; if digital forensics could generalize its approach to surpass this standard, the industry would achieve a milestone, and digital forensics would be viewed as far more respected and appreciated.
-Dominique Briscoe, M.S.C.T
The intent of this article series is to give non-IT citizens an understanding of how the Cybersecurity Executive Order affects them and what the order entails. This is a five-part series of articles, each expanding on one section with careful reasoning.
Pressing Forward into Section 2!
To the modern-day end user, the view of an infrastructure can be large or small. So, let's expand our horizon a little farther than the infrastructures of small businesses, local governments, and even some corporate companies. What Section 2 of the CS EO focuses on is handling security for those systems that are deemed critical to the safety of the country. We are talking about the infrastructures that affect our national security and our economic well-being. In fact, it would be better to think of these systems as our asset systems, since they have such a huge effect on our lives. We're talking about electrical generation, telecommunications, water supply, transportation, and so forth.
Anyway, I think you get my drift about the critical state of the systems referred to in Section 2 of the CS Executive Order. If you have followed my blog and my previous article, please do notice that these are the systems we were referencing in Section 1, as mentioned in Part I. Only this time the entire section is dedicated to the upkeep of this critical infrastructure instead of how the federal government will operate.
When we begin with Section A of Section 2, we see that it simply acknowledges the policy giving the executive branch of the government authority to support and assist in creating secure risk management procedures for these critical infrastructure entities (or businesses).
Section B acknowledges that the infrastructures at greatest risk will cooperate with certain federal agencies (listed in the EO) identified by the Secretary of Homeland Security. The purpose of this collaboration is to identify and list strategies these federal agencies can use to support these entities. The primary concern is the protection of data. The federal agencies must also ensure that all involved areas (e.g., operational, budgeting) can work together to make the plan feasible and aligned with all respective processes. At this point in the EO, the NIST framework is referenced and set as a requirement for these agencies to follow.
So, what is the purpose of the NIST framework?
One challenge commonly noted in IT and CS is the inability to collaborate on a generic basis. In other words, vendors all have differing frameworks and templates, which can create confusion and excess work. It is my opinion that the EO's insistence on adopting such a framework, aligning the agencies with the private sector companies that choose to use this approach, can be a viable way of retaining a general understanding of all entities involved. It creates a generic route, and if problems arise we have a generic template to use for a clear understanding of system configuration. In CS we would refer to this process as baselining. But of course, this can be discussed in other articles.
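Baselining, in its simplest form, means comparing a system's current configuration against an agreed-upon template and flagging every deviation. Here is a minimal sketch of that idea; the setting names and values are hypothetical examples, not anything taken from the EO or the NIST framework.

```python
# Minimal configuration-baselining sketch: compare current settings
# against a baseline template and report every deviation.
# All setting names below are hypothetical examples.

def diff_against_baseline(baseline: dict, current: dict) -> dict:
    """Return {setting: (expected, actual)} for every deviation."""
    deviations = {}
    for setting, expected in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            deviations[setting] = (expected, actual)
    return deviations

baseline = {"ssh_root_login": "disabled", "password_min_length": 12, "auto_patching": "enabled"}
current  = {"ssh_root_login": "enabled",  "password_min_length": 12, "auto_patching": "enabled"}

for setting, (expected, actual) in diff_against_baseline(baseline, current).items():
    print(f"{setting}: expected {expected!r}, found {actual!r}")
```

Because every agency checks against the same template, a deviation report means the same thing everywhere, which is exactly the "general understanding" benefit described above.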
Anyway, back to the point, Section B! Along with the request to use the NIST framework, there is also a request for risk management reports within 90 days. The important aspects that need to be addressed in these reports include presenting system insufficiencies, addressing budgetary needs that have not been met, and identifying accepted risks along with unmitigated vulnerabilities. It is also important that systems are reassessed periodically, that flexibility is given for changes that may need to be made, and that the presented policies are, of course, aligned with the NIST framework.
As we move on through Section B and into Section C of the EO, we see that the process is very similar to that of Section 1. After the specifics of dealing with critical infrastructure are addressed, these proposals are once again sent through channels for approval. These channels include the Secretary of Homeland Security, the Director of OMB, and other executive offices of the federal government. The Director of the American Technology Council prepares a report with those offices describing the effects of such a transition, so that all considerations can be made and the overall effect can be realistically viewed. It is no easy task to coordinate communication for the critical infrastructure of an entire nation such as ours, and it must be done carefully, with guided expertise behind every decision.
In conclusion, the section ends by delegating the Assistant to the President for National Security Affairs and the Assistant to the President for Homeland Security and Counterterrorism as those with overall responsibility for implementation. This means they would implement, monitor, evaluate, and improve this critical infrastructure security in accordance with the Executive Order.
-Dominique Briscoe, M.S.C.T
Simply put, IP routing is the path information takes to get from source to destination with the use of routing protocols. Many don't realize that these routing protocols are not always well protected against deliberate or accidental propagation of incorrect routing information. They function with implicit trust in both their peers and the information they receive, and neither trust is suited to the current Internet environment (Badger, 1996). With this in mind, we have to consider that the process of routing information can be deemed unsafe. After careful research I was dismayed to find a large supply of routers with open vulnerabilities, especially routers posing serious threats to the actual delivery of data and control-plane packets. In this article I give a small overview of the actions and mishaps of both external and internal routing devices.
▪Internal routers - are commonly used to keep subnet traffic separated. In terms of attack types, DNS can be compromised and used to redirect the initial request for service, providing an opportunity to execute a man-in-the-middle attack. This only takes place when another router is at the other end. As a means of securing the internal router, these devices can use NAT (Network Address Translation) to improve security. NAT uses an alternate public IP address to hide the network's real IP addresses. An attacker will have more difficulty identifying the layout of a network behind a firewall that uses NAT.
Second, internal routers need to use an authority in the autonomous system to produce signed authorizations of the networks a router is allowed to announce (Badger, 1996). This protects against an internal router's habit of announcing nonexistent host routes, and is especially necessary when large numbers of nonexistent host routes are involved.
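To make the NAT idea above concrete, here is a simplified sketch of the translation table a NAT router keeps. This is illustrative only (real NAT rewrites packets inside the router); the public address is an example from the TEST-NET-3 documentation range, and all names are hypothetical.

```python
# Simplified NAT sketch: outbound connections are rewritten to the
# router's single public IP and a fresh port; replies are mapped back
# to the hidden private address using the translation table.

PUBLIC_IP = "203.0.113.7"  # example documentation address, not a real host

class NatTable:
    def __init__(self):
        self.next_port = 40000
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Rewrite an outgoing connection; outsiders only ever see PUBLIC_IP."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.outbound[key]

    def translate_in(self, public_port):
        """Map a reply arriving at PUBLIC_IP back to the internal host."""
        return self.inbound[public_port]

nat = NatTable()
print(nat.translate_out("192.168.1.10", 5555))  # ('203.0.113.7', 40000)
print(nat.translate_in(40000))                  # ('192.168.1.10', 5555)
```

Notice that the private address 192.168.1.10 never appears on the outside, which is why NAT makes it harder for an attacker to map the internal network layout.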
▪Autonomous System Boundary Routers - seem to be the total opposite of internal routers in operation, since they tend to announce nonexistent host routes to the external end. Internal routers can have digital signature protection, but ASBRs don't. When dealing with external routers, every piece of routing information about outside routes, forged or real, that is introduced into the domain cannot be verified, and it is propagated to all OSPF areas of the domain that are not configured as stub areas (Jones, 2003).
Whether external or internal, both types present vulnerabilities. Routing information that incorrectly reports OSPF areas, or any other portion of the domain, as unreachable will deny service to all hosts that connect to or exchange traffic with those areas. This opens the system up to network congestion, looping, eavesdropping, and overloading, to name just a few effects. The practice of announcing nonexistent host routes tends to open the network up to man-in-the-middle, message deletion, message modification, and denial-of-service attacks.
Typically, simple password or cryptographic authentication methods are used to address these vulnerabilities. With simple password authentication, the password is carried in plain text in the header of each OSPF message. With cryptographic authentication, a MAC (message authentication code) protects each OSPF message; the downside of this approach is that no field of the IP header is protected by the MAC.
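The MAC idea can be sketched in a few lines. OSPF's cryptographic authentication historically uses keyed MD5; below, HMAC-SHA256 stands in for the same principle: sender and receiver share a secret key, and a MAC computed over the message lets the receiver detect tampering. The key and the route advertisement text are hypothetical examples.

```python
# Sketch of message authentication as used conceptually by OSPF's
# cryptographic authentication (OSPF itself uses keyed MD5; HMAC-SHA256
# stands in here for the same idea). The MAC covers only the routing
# message itself, not the IP header that carries it.
import hashlib
import hmac

SHARED_KEY = b"example-ospf-key"  # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    """Compute a MAC over the message with the shared key."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign(message), mac)

msg = b"LSA: network 10.0.0.0/8 via router 1.1.1.1"
mac = sign(msg)
print(verify(msg, mac))  # True: message arrived intact
print(verify(b"LSA: network 10.0.0.0/8 via router 6.6.6.6", mac))  # False: tampered
```

A forged or modified advertisement fails verification unless the attacker also holds the shared key, which is exactly what plain-text passwords cannot guarantee.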
Finally, understand that securing OSPF depends on how well it is configured and managed. To mitigate the risk, managers should employ a method of "manual stops": "A manual stop event causes the OSPF router to bring down all its adjacencies, release all associated OSPF resources, and delete all associated routes" (Jones, 2003).
-Dominique Briscoe, M.S.C.T.
There is now an extreme need to increase understanding among IT managers, police, lawyers, lawmakers, and even lawbreakers of what constitutes crime in the cyberspace business environment. The coming of computer technology has delivered a wide range of advantages and opportunities; some of these, not surprisingly, are criminal in nature (Thompson, 2015).
While reading an article about crimes enabled via the internet, I was astonished by this statement because it was both concise and very true. What constitutes a cybercrime? A formal definition is criminal activity carried out by means of the internet. An informal, easy-to-understand definition would be crimes committed online.
Ok, so for the most part the United States has completed the task of making clear to traditional criminals what will or will not be tolerated. Over the years it has implemented laws that support and enforce this demand. Has the same approach been taken with the activities of cybercriminals? Many would argue that there are laws in place for these activities, but I think we can all agree that there is minimal implementation and media attention devoted to this cause. The question now is: how should they be enforced?
First, it is my opinion that people need to view the nature of traditional crimes versus cybercrimes. For instance, in the case of identity theft, we know that a thief would traditionally obtain credit card information or even Social Security numbers. This person could be anyone with access to financial information: a postal worker, or even a neighbor checking your mailbox in your absence. Identity theft impostors don't need mass education for what they do; they simply target you and plan ways to obtain your identification and misuse your identity to their advantage.
Cybercriminals have the same intentions but utilize computer tricks that require a little more knowledge and expertise. For instance, in August of 2008 an extreme occurrence of identity theft took place among a group of people. They drove by, or loitered at, buildings in which wireless networks were housed, and installed sniffers that recorded passwords, card numbers, and account data. This was the result of inadequately secured wireless networks (Bosworth, 2014).
A continually recurring form of identity theft is the fraud faced by social media users. It has taught users that even on social media sites like Facebook they can't let their guard down. A user's identity is often stolen, and acquaintances sometimes receive emails requesting money, sent using the user's known Facebook profile. The downside of using any social network is that, with limited government oversight and few industry standards or incentives to educate users on security, privacy, and identity protection, users are exposed to identity theft and fraud (Lewis, 2015).
To conclude, this is where lawmakers and law enforcers must step forward and uphold what has been written to prosecute cybercriminals. Just as traditional criminals came to understand that stealing mail for identity theft was unlawful, cybercriminals must now believe that this same law pertains to their actions and that even unlawfully intercepting traffic from a network is punishable. Some of these investigations are difficult because they may involve jurisdiction issues. Jurisdiction issues can be confusing for law enforcement agencies that are not familiar with identity theft or do not have departmental procedures for receiving and investigating complaints of identity theft (Dadisho, 2015), but this is still not reason enough to avoid creating a more solid approach to implementing cybercrime penalties. Cybercriminals, like any other criminals, must understand that this will not be tolerated, even in their "non-transparent" industry!
Many were amused and astounded by the creation of an EXECUTIVE ORDER for CYBERSECURITY. Put in place initially by POTUS #44 Barack Obama, we later received an updated version when POTUS #45, Donald Trump, signed his EO on May 11, 2017, extending the original EO for CS. But what does this "order" really entail anyway? Let's face it, Cybersecurity is now a big factor for us, and it is not going anywhere until we invest in making it important in our lives. In this article I will break down the first section of the EXECUTIVE ORDER. Pay attention to the dates, and along the way take note of whether all agencies have done their part in submitting documents within 90 days, which should be the initial start to improving our Cybersecurity strategies in the US!
Here we go!
Section 1, as mentioned before, is the Federal Networks section. Section A within Section 1 simply singles out the federal government as the first group of recipients of this order, who have no choice but to implement it by following directions and making their submissions on time. Section B gives findings and suggestions. It suggests first that when information is shared, it must support awareness, detection, mitigation, and recovery from unauthorized attacks. It suggests that until now federal IT defense has been outdated, and that at this point risk management must do more than simply protect data; it must also emphasize future improvements and modernization. It suggests that one source of weakness is "known unmitigated vulnerabilities": in other words, workers being careless about applying security patches, which leads to risk. It places an emphasis on following the directions supplied by vendors when configuring systems, especially for security.
Finally, the findings suggest that the teams be led by agency heads and be composed of experts fluent in budgets, human resources, law, privacy, acquisition, and security, with which I thoroughly agree! This section was easy and delightful to read, as it shows that the President and the agencies involved in composing this order recognized how important it was to acknowledge that US IT defense is outdated.
As we move further into Section 1, geared towards the federal government, risk management is discussed, and we begin to hear exactly how the NIST framework plays a factor in this development. Risk management means forecasting and evaluating risks and identifying procedures to minimize their impact. First, agency heads are again recognized and identified as the party responsible for risk management. They are instructed to analyze the risks of unauthorized access, use, disclosure, modification, disruption, and destruction of data and IT. In a nutshell, they are responsible for all activities of risk management.
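One common form this forecasting and evaluation takes is scoring each identified risk as likelihood times impact and ranking the results, so mitigation effort goes to the biggest risks first. Here is a minimal sketch of that step; the risks and scores below are hypothetical examples, not anything drawn from the EO.

```python
# Minimal risk-scoring sketch: score = likelihood x impact (each on a
# 1-5 scale), then rank so the largest risks are addressed first.
# All risk entries are hypothetical.

risks = [
    {"name": "unpatched web server", "likelihood": 4, "impact": 5},
    {"name": "lost laptop",          "likelihood": 2, "impact": 3},
    {"name": "insider data leak",    "likelihood": 1, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f"{r['name']}: {r['score']}")
```

A ranking like this also makes "accepted" risks visible: anything left unmitigated at the bottom of the list is a choice the agency must be able to explain, which is exactly what the reporting requirement discussed next asks for.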
The proposed NIST Framework is then solidified as part of this package, not only recommended but required for agencies to use, and a report of the results is requested by August 11, 2017 (90 days). The detailed report shall clarify the choices agencies have accepted as factors in risk mitigation. But what does this mean? Sometimes agencies choose not to mitigate a risk; they simply choose to accept that they have the risk but not to act on it. This can be extremely risky, and the order requests that for every accepted risk, the agency be able to explain how it may affect strategies, budgets, and operations. As a Cybersecurity Specialist, this is a big whoopsie for me. It bears caution but also understanding: no, we don't have to address every single risk, but unaddressed risks may affect us in the future.
The agency is then requested to submit an action plan for implementing the NIST framework in its departments. After this is submitted, the Secretary of DHS will assess the agency plans and determine whether the approaches presented, along with the accepted risks, are appropriate for managing the overall executive branch enterprise. So, here in this section we see the importance of proper management of the executive branch enterprise. We can assume that any submitted report posing too much risk to this enterprise would be rejected and revisions would be requested, along with an articulation of what changes would need to be made. Or at least I would hope that this would be the case.
Finally, these reports would reach the hands of Donald Trump through his assistant at DHS. This final report would be aligned with budgetary needs for risk management along with generating a consistent procedure for reassessment of budgetary needs in the future. The DHS Assistant would have worked in collaboration with the Secretary of Commerce, OMB Director, and AGS, to present a plan to maintain a more secure and resilient IT architecture.
This section concludes by bringing attention to the fact that agency heads should become more mindful when selecting shared IT services such as cloud, email, and cybersecurity services. It also notes the intended receipt of a report, compiled by the Director of the American Technology Council, regarding modernizing IT, describing the legal and budgetary ramifications of transitioning all agencies and their subsets, and assessing the effects of the transition.
In a nutshell, the first portion of the EXECUTIVE ORDER is geared towards the federal government in hopes that the private sector will follow suit. However, we must first be efficient in emphasizing the order within the federal government and analyzing its benefits there. In concluding this article, I ask: has the federal government been hacked lately?
-Dominique Briscoe, M.S.C.T.