We provide a systematic review of epidemiological surveys of autistic disorder and pervasive developmental disorders (PDDs) worldwide. A secondary aim was to consider the possible impact of geographic, cultural/ethnic, and socioeconomic factors on prevalence estimates and on clinical presentation of PDD. Based on the evidence reviewed, the median of prevalence estimates of autism spectrum disorders was 62/10 000. While existing estimates are variable, the evidence reviewed does not support differences in PDD prevalence by geographic region, nor a strong impact of ethnic/cultural or socioeconomic factors. However, power to detect such effects is seriously limited in existing data sets, particularly in low-income countries. While it is clear that prevalence estimates have increased over time and vary across neighboring and distant regions, these findings most likely reflect broadening of the diagnostic concepts, diagnostic switching from other developmental disabilities to PDD, service availability, and awareness of autistic spectrum disorders in both the lay and professional public. The lack of evidence from the majority of the world's population suggests a critical need for further research and capacity building in low- and middle-income countries.
The digitization of practically everything, coupled with the mobile Internet, the automation of knowledge work, and advanced robotics, promises a future with democratized use of machines, widespread use of robots, and customization. However, pervasive use of robots remains a hard problem. Where are the gaps that we need to address in order to advance toward a future where robots are common in the world and they help reliably with physical tasks? What is the role of computation along this trajectory? In this talk I will discuss challenges toward pervasive use of robots and recent developments in algorithms for customizing robots. I will focus on a suite of algorithms for automatically designing, fabricating, and tasking robots using a print-and-fold approach. I will also describe how computation can play a role in creating robots more capable of reasoning in the world. By enabling on-demand creation of functional robots from high-level specifications, we can begin to imagine a world with one robot for every physical task. Bio: Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Rus's research interests are in robotics, mobile computing, and data science. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, and a member of the National Academy of Engineering. She earned her PhD in Computer Science from Cornell University. Prior to joining MIT, Rus was a professor in the Computer Science Department at Dartmouth College.
And yet, too many women are still denied these basic human rights. Too often, their liberty and dignity are compromised. And, too many of them are subjected to violence. Violence against women is perhaps the most shameful human rights violation. And, it is perhaps the most pervasive. It knows no boundaries of geography, culture or wealth. As long as it continues, we cannot claim to be making real progress towards equality, development and peace.
Before we even try to answer these questions, we need to establish how pervasive corporate fraud is. Is the fraud we observe the whole iceberg or just its visible tip? The answer to this question requires an estimate of the ratio of the exposed tip to the submerged portion, also known as the detection likelihood. Thus far in the literature, approaches to estimating the detection likelihood (and by implication the hidden prevalence of fraud) have included (i) accounting misconduct prediction models (Beneish 1999; Dechow et al. 2011), (ii) corporate surveys (Dichev et al. 2013), and (iii) structural, partial-observability approaches (Wang 2013; Wang et al. 2010; Hahn et al. 2016; Zakolyukina 2018). In this paper, we introduce a fourth approach, which relies on a natural experiment: the 2001 demise of Arthur Andersen (AA).
The simple idea is that after the AA demise, former AA clients were subject to vastly increased scrutiny. They found themselves in the spotlight of the media, investment intermediaries, short-sellers, and their internal gatekeepers. In addition, they were forced to seek a different audit firm. Given the extreme cloud of suspicion that was covering AA clients immediately after the Enron scandal exploded (Chaney and Philipich 2002; Krishnamurthy et al. 2006), the new auditors, as well as all other fraud detectors, had strong incentives to uncover any fraud committed by former AA clients. Even if this increased scrutiny does not reveal all the existing fraud, the Kolmogorov axiom of conditional probability allows us to derive an upper-bound (and thereby conservative) estimate of the detection likelihood, which in turn provides us with a lower-bound estimate of the pervasiveness of corporate fraud.
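The logic of this bound can be sketched numerically. By the definition of conditional probability, Pr(F) = Pr(F, caught) / Pr(caught | F), so the pervasiveness estimate is decreasing in the detection likelihood: overstating the detection likelihood can only understate Pr(F). The rates in the sketch below are invented for illustration and are not the paper's estimates.

```python
# By the conditional-probability identity Pr(F) = Pr(F, caught) / Pr(caught | F),
# an upper-bound detection likelihood yields a lower-bound (conservative)
# estimate of fraud pervasiveness.

def fraud_pervasiveness(pr_fraud_and_caught: float, detection_likelihood: float) -> float:
    """Implied pervasiveness of fraud, given the rate of caught fraud
    and the detection likelihood Pr(caught | F)."""
    return pr_fraud_and_caught / detection_likelihood

# Hypothetical numbers, for illustration only:
pr_fraud_and_caught = 0.033   # observed annual rate of caught fraud
true_detection = 0.25         # (unobservable) true detection likelihood
upper_bound_detection = 0.33  # upper-bound estimate from the natural experiment

# Using the upper bound understates pervasiveness, so the estimate is conservative.
assert fraud_pervasiveness(pr_fraud_and_caught, upper_bound_detection) <= \
       fraud_pervasiveness(pr_fraud_and_caught, true_detection)
```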
With our estimates of the detection likelihood, we can calculate the pervasiveness of corporate fraud as a function of the definition of fraud we adopt. Accounting violations are widespread: in an average year, 41% of companies misrepresent their financial reports, even when we ignore simple clerical errors. Fortunately, securities fraud is less pervasive. In an average year, 10% of all large public corporations commit (alleged) securities fraud, with a 95% confidence interval between 7 and 14%.
The rest of the paper proceeds as follows. Section 2 describes the data and presents summary statistics on caught frauds. Section 3 explains our methodology. Section 4 presents the detection likelihood and the pervasiveness of corporate fraud results. Section 5 provides estimates of the cost of such fraud, and Section 6 concludes.
A more general concern is that the increased attention on corporate fraud that followed the Enron and WorldCom scandals might have prompted the SEC (or the other auditors not affected by the turnover) to become more active in detecting fraud. In the limit, if this enhanced scrutiny exposed all fraud in all firms, there would be no difference between the amount of fraud revealed in AA clients and the amount of fraud revealed in clients of other audit firms, invalidating our experiment. Yet, as long as enhanced scrutiny affects all firms but AA firms are affected more, our methodology will work, but will underestimate undetected fraud. Thus, our results should be interpreted as a lower bound on the pervasiveness of fraud.
Note also that we derive the detection likelihood by comparing fraud detection in two groups (former AA clients and non-AA clients) at the same time. Thus, this estimate should not be affected by fluctuations in the probability of committing fraud, as long as these fluctuations are similar in the two groups. Yet, these fluctuations will matter for the level of fraud pervasiveness at any point in time, since this is likely to vary over the business cycle as shown by Wang et al. (2010).
We now can use the detection likelihood estimates in Eq. (1), \(Pr\left(F\right)=\frac{Pr\left(F,caught\right)}{Pr\left(caught\mid F\right)}\), to estimate the overall pervasiveness of corporate fraud. The numerator in Eq. (1) is the observable incidence of fraud that is caught. The denominator in Eq. (1) is the detection likelihood. Table 6 reports observed caught frequencies in Panel A, detection likelihood estimates from Table 3 in Panel B, a baseline estimate of the pervasiveness of fraud across the measures of misconduct and alleged fraud in Panel C, and a best estimate of the pervasiveness of fraud across the measures of misconduct and alleged fraud using the best estimate detection likelihood in Panel D. Since AA firms during the detection period are assumed to have a probability of detection equal to one, we exclude them from Panel A, which computes the frequency of caught frauds under normal circumstances.
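The Eq. (1) calculation is a simple division of the Panel A caught frequency by the Panel B detection likelihood. The sketch below illustrates it with invented inputs; the frequencies and detection likelihoods are not taken from Tables 3 and 6, though they are chosen to be roughly consistent with the headline results reported above.

```python
# Eq. (1): Pr(F) = Pr(F, caught) / Pr(caught | F).
# Inputs below are hypothetical, for illustration only.

def pervasiveness(caught_frequency: float, detection_likelihood: float) -> float:
    """Overall pervasiveness of fraud implied by Eq. (1)."""
    return caught_frequency / detection_likelihood

measures = {
    # measure: (annual caught frequency, detection likelihood) -- made up
    "accounting misconduct": (0.14, 0.34),
    "alleged securities fraud": (0.033, 0.33),
}

for measure, (caught, detect) in measures.items():
    print(f"{measure}: Pr(F) = {pervasiveness(caught, detect):.1%}")
```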
Focusing first on Panel A, recall that Fig. 1 showed that observed incidences of misconduct vary widely depending on the definition of corporate fraud and the time period of reference. Since fraud may be cyclical (Wang et al. 2010), we do not want to rely on a specific point in time, instead preferring to estimate pervasiveness over a full cycle of boom and bust years. The start in January 1998 and the end point in December 2005 are each almost exactly halfway through the respective expansion periods; thus the period covers one full business cycle from mid-point to mid-point.
Our findings as to the pervasiveness of corporate fraud depend on the measure of misconduct we use. We find that in any year averaged across the business cycle, 2.5% of large corporations are committing severe financial misreporting that auditors can detect. Auditor-detected securities fraud is a subcategory of SCAC alleged securities fraud; thus, it is not surprising that it has a low frequency. Such a measure is useful for estimating the detection likelihood, given that it maps well to our AA demise design, but the SCAC securities frauds are more informative about fraud at large.
We find that, during an average year over the business cycle, 10% of large corporations are committing a misrepresentation, an information omission, or another misconduct that can lead to an alleged securities fraud claim settled for at least $3 million (with a 95% confidence interval between 7 and 14%). This result from the SCAC data is our main estimate of the pervasiveness of corporate fraud, since SCAC cases are indeed (alleged) securities fraud. Using the AAER measure, we arrive at similar estimates: a fraud pervasiveness of 8%, with a 95% confidence interval between 6 and 11%. This magnitude is similar to the SCAC estimate, even though AAERs have lower observed frequencies because their existence requires the SEC to act. (Recall that the SEC failed to act on Madoff despite six substantive complaints.)
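Since the pervasiveness estimate is a ratio of two estimated proportions, its confidence interval must account for sampling uncertainty in both the caught-fraud frequency and the detection likelihood. The sketch below illustrates one standard way to do this, a delta-method interval for a ratio of independent binomial proportions; the sample sizes and rates are invented, and this is not necessarily the paper's actual procedure.

```python
import math

def ratio_ci(p_caught, n_firms, p_detect, n_caught, z=1.96):
    """Delta-method 95% CI for pervasiveness = p_caught / p_detect,
    treating the two proportions as independent binomial estimates.
    All inputs here are hypothetical."""
    est = p_caught / p_detect
    var_c = p_caught * (1 - p_caught) / n_firms   # variance of caught frequency
    var_d = p_detect * (1 - p_detect) / n_caught  # variance of detection likelihood
    # relative variance of a ratio of independent estimates
    rel_var = var_c / p_caught**2 + var_d / p_detect**2
    se = est * math.sqrt(rel_var)
    return est - z * se, est + z * se

# Hypothetical inputs: 3.3% annual caught-fraud rate among 2,000 firms,
# 33% detection likelihood estimated from 150 caught cases.
lo, hi = ratio_ci(0.033, 2000, 0.33, 150)
print(f"point estimate 10.0%, 95% CI [{lo:.1%}, {hi:.1%}]")
```

With these invented inputs the interval comes out in the same ballpark as the 7–14% range reported above, which illustrates why the interval is wide: most of the uncertainty comes from the detection likelihood, which is estimated from relatively few caught cases.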