The growth of Covid-19 cases in the USA and around the world is truly startling. Here are some graphs of data to illustrate how pervasive the disease is in the US. The picture they paint is not hopeful.
The same data, presented for each county in the USA, illustrate how, to date at least, the spread has been concentrated in urban areas.
The sheer number of cases is, however, influenced by population. If we convert the number of cases into a per capita rate, the picture looks a bit different.
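The per-capita conversion is simple arithmetic, but it is worth being explicit about. A minimal sketch, using made-up county figures rather than the real data behind the maps:

```python
# Sketch of the per-capita conversion described above. The county names,
# case counts, and populations are hypothetical stand-ins, not real data.

def cases_per_capita(cases, population, per=100_000):
    """Convert a raw case count into a rate per `per` residents."""
    if population <= 0:
        raise ValueError("population must be positive")
    return cases * per / population

counties = {
    "County A": (12_000, 1_500_000),  # (cases, population)
    "County B": (300, 40_000),
}

for name, (cases, pop) in counties.items():
    print(f"{name}: {cases_per_capita(cases, pop):.1f} cases per 100,000")
```

Note how the ranking can flip: the county with 40 times as many raw cases has roughly the same per-capita rate once population is taken into account, which is exactly why the per-capita maps look different.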
One problem with using the maps as presented above is that some of the nuances are lost by virtue of collapsing all of the data into 8 categories. The following two maps rectify that limitation to some degree. The situation in New York, New Jersey, and Louisiana is dire.
And despite the efforts of many to flatten the curve, so far the curves all seem to be accelerating -- not slowing. Here are the growth curves -- plotted against the date on which the first case was observed -- for all 50 states, grouped according to the geographic division into which each state falls.
While the growth rates are more serious in some states as compared to others, none of them appear to be close to tipping over and beginning to slow down.
Yesterday I was delighted to get a year-end surprise when I heard that a manuscript some colleagues and I published earlier this year was recognized as a "paper of the year" by the American Journal of Health Promotion. The manuscript, "Measuring Participation in Employer-Sponsored Health and Well-Being Programs: A Participation Index and Its Association With Health Risk Change," proposes a new, more granular way of looking at the concept of participation. Rather than a simple binary approach in which one either does or does not participate in a wellness initiative, the manuscript demonstrates that a continuous view measuring the degree of participation can provide better insight into outcomes.
Many thanks to my colleagues on the paper, Erin L. D. Seaverson, MPH, Stefan B. Gingerich, MS, and David R. Anderson, PhD, for their work on conceptualizing and pulling this paper together, and to the editorial team at the American Journal of Health Promotion, with Paul E. Terry, PhD, as its Editor in Chief. Thank you for this wonderful honor.
It is always nice to hear that a manuscript submitted for publication has been accepted. Just a few minutes ago word came in that "Development and Validity of a Workplace Health Promotion Best Practices Assessment" has been accepted for publication by the Journal of Occupational and Environmental Medicine. Thanks to the many colleagues at HERO who worked on developing this piece.
I'm working on this project with HERO, and thought that I'd share their press release here on my blog.
(For immediate release)
HERO LAUNCHES RESEARCH STUDY TO EXAMINE POTENTIAL IMPACT OF WORKPLACE CULTURE AND WELL-BEING ON EMPLOYEE ENGAGEMENT AND RETENTION
HERO Scorecard Engagement and Retention Study to examine influencers of employee turnover and employee perceptions of organizational support
WACONIA, MN (February 15, 2017) — The Health Enhancement Research Organization (HERO) is going where no organization has gone before by investigating the association between employer health and well-being practices, workforce turnover, and employee perceptions of organizational support.
This will be accomplished by examining six domain scores on the HERO Health and Well-Being Best Practices Scorecard in Collaboration with Mercer® (HERO Scorecard) as predictors of outcomes related to retention rates and perceived organizational support for companies that have completed the HERO Scorecard.
According to Jessica Grossmeier, Ph.D., vice president of research for HERO, past studies have demonstrated a correlation between companies that perform well on the HERO Scorecard and those that demonstrate strong financial performance, as well as a connection between best practices and health care costs. More recent analyses conducted by HERO Scorecard collaborator, Mercer, identified a relationship between HERO Scorecard scores and employer-reported turnover rates.
This newly launched study, which has been named the HERO Scorecard Engagement and Retention Study, is based on data collected from HERO Scorecard completers between 2014 and 2017. The HERO Scorecard is a free, online tool for employers of all sizes that allows them to assess their wellness program initiatives based on a defined set of industry best practices for improving employee well-being. Companies that complete the HERO Scorecard receive a score for each best practice area, as well as a cumulative score. They also can access national benchmarking data to see how their program compares to other organizations completing the Scorecard.
“For several years now, employers have been chasing what we call ‘Big E’ engagement because of the positive influence it can have on an organization. This study will get us closer to understanding this relationship and how employers can influence the situation by looking specifically at the impact workplace well-being programs and best practices have on engagement,” said Paul Terry, Ph.D., president and CEO of HERO. “This is tangible information employers can use to increase the value of their well-being initiatives.”
Some of the best practice areas defined in the HERO Scorecard, whose impact may be measured in the HERO Scorecard Engagement and Retention Study, include:
According to Grossmeier, the study will be completed in June 2018 with findings being released shortly thereafter. HERO will conduct the study in collaboration with Pro-Change Behavior Systems, which will serve as lead data analyst; MRA, Inc., which will act as consulting statistician; and the Institute for Positive Organizational Health, which will conduct a literature review to inform the study. Research consultants were selected after a competitive RFP process and peer review. The HERO Research Committee will provide oversight of the study.
For more information about HERO research visit www.hero-health.org.
For more information:
Barbara Tabor, HERO / (+1 651-450-1342) / email@example.com
About HERO – Based in Waconia, MN, HERO (the Health Enhancement Research Organization) is a not-for-profit, 501(c)(3) corporation that was established in 1997. HERO is dedicated to identifying and sharing best practices that improve the health and well-being of employees, their families and communities. To learn more, visit www.hero-health.org. Follow us on Twitter @heroehm, Facebook, or LinkedIn.
About Pro-Change Behavior Systems – Pro-Change Behavior Systems is celebrating its 20th year as an internationally recognized research and development company comprised of behavior change scientists and software developers dedicated to the systematic implementation of best practices of behavior change in the development and evaluation of well-being solutions. To learn more, visit www.prochange.com.
About Mangen Research Associates, Inc. – Mangen Research Associates Inc. (MRA) is a statistical consulting firm that specializes in developing data-based solutions to address a variety of management information needs. Founded in 1984, MRA primarily serves clients in the e-commerce, financial services, medical technology and health care information fields. To learn more, visit www.mrainc.com.
About the Institute for Positive Organizational Health – The Institute for Positive Organizational Health is a collaborative of researchers and consulting partners with a shared purpose of creating more flourishing individuals, organizations, communities, and natural environments. We provide resources, conduct research, and consult with employer organizations to help create more compassionate cultures. To learn more, visit www.culturecolab.org.
I just received this email regarding an excellent workshop on causal inference. These folks do a fine job and I'm happy to promote their workshop.
2018 Northwestern-Duke Main and Advanced Causal Inference Workshops
[please recirculate to others who might be interested]
Northwestern University and Duke University are holding our “main” week-long workshop on Research Design for Causal Inference – our ninth annual workshop -- at Northwestern Law School in downtown Chicago. We invite you to attend. Our apologies for the length of this message.
Main Workshop: Monday – Friday, June 18-22, 2018
We will also be holding an “Advanced” Workshop the following week:
Advanced Workshop: Monday – Wednesday, June 25-27, 2018
Both workshops will be taught by world-class causal inference researchers. See below for details. Registration is limited to around 100 participants. In the past we have filled the main workshop quickly. So please register soon.
For information and to register: www.law.northwestern.edu/research-faculty/conferences/causalinference/
Bernard Black (Northwestern University)
Bernie Black is Nicholas J. Chabraja Professor at Northwestern University, with positions in the Pritzker School of Law, the Institute for Policy Research, and the Kellogg School of Management, Finance Department. Principal research interests: health law and policy; empirical legal studies, law and finance, international corporate governance. Web page with link to CV: www.law.northwestern.edu/faculty/profiles/BernardBlack/. Papers on SSRN: http://ssrn.com/author=16042.
Mathew McCubbins (Duke University)
Professor of Political Science and Law at Duke University, with positions in the Political Science Department and the Law School, and director of the Center for Law and Democracy. Principal research interests: democratic institutions, legislative organization; behavioral experiments, communication, learning and decisionmaking; statutory interpretation, administrative procedure, research design; network economics. Web page with link to CV: www.mccubbins.us. Papers on SSRN: http://ssrn.com/author=17402.
Main Workshop Overview: Research design for causal inference is at the heart of a “credibility revolution” in empirical research. We will cover the design of true randomized experiments and contrast them to natural or quasi experiments and to pure observational studies, where part of the sample is treated in some way, the remainder is a control group, but the researcher controls neither the assignment of cases to treatment and control groups nor administration of the treatment. We will assess the causal inferences one can draw from a research design, threats to valid inference, and research designs that can mitigate those threats.
Most empirical methods courses survey a variety of methods. We will begin instead with the goal of causal inference, and emphasize how to design research to come closer to that goal. The methods are often adapted to a particular study. Some of the methods are covered in PhD programs, but rarely in depth, and rarely with a focus on credible causal inference and which methods to use with messy, real-world datasets and limited sample sizes. Several workshop days will include a Stata “workshop” to illustrate selected methods with real data and Stata code.
Advanced Workshop Overview: The advanced workshop provides in-depth discussion of selected topics that are beyond what we can cover in the main workshop. Principal topics for 2018 include: Day 1 (Mon.): Principal stratification (generalization of causal-IV concepts and applications), including sample censoring through death or attrition. Day 2 (Tues.): Direct and indirect causal effects. Synthetic controls and other advanced “matching” approaches with emphasis on panel data sets. Day 3 (Wed.): Application of machine learning methods to causal inference.
Target audience for main workshop: Quantitative empirical researchers (faculty and graduate students) in social science, including law, political science, economics, many business-school areas (finance, accounting, management, marketing, etc.), medicine, sociology, education, psychology, and more – anywhere that causal inference is important.
We will assume knowledge, at the level of an upper-level college econometrics or similar course, of multivariate regression, including OLS, logit, and probit; basic probability and statistics including conditional and compound probabilities, confidence intervals, t-statistics, and standard errors; and some understanding of instrumental variables. Despite its modest prerequisites, this course should be suitable for most researchers with PhD-level training and for empirical legal scholars with reasonable but more limited training. Even for recent PhDs, there will be much that you don’t know, or don’t know as well as you should.
Target Audience for Advanced Workshop: Empirical researchers who are reasonably familiar with the basics of causal inference (from our main workshop or otherwise), and want to extend their knowledge. We will assume familiarity with potential outcomes notation, difference-in-differences, regression discontinuity, panel data, and instrumental variable designs, but will not assume expertise in any of these areas.
Main Workshop faculty (in order of appearance)
Donald B. Rubin (Harvard University, Department of Statistics)
Donald Rubin is John L. Loeb Professor of Statistics, Harvard University. His work on the “Rubin Causal Model” is central to modern understanding of when one can and cannot infer causation from regression. Principal research interests: statistical methods for causal inference; Bayesian statistics; analysis of incomplete data. Web page, with link to CV: https://statistics.fas.harvard.edu/people/donald-b-rubin; Wikipedia: http://en.wikipedia.org/wiki/Donald_Rubin
Justin McCrary (University of California, Berkeley, Law School)
Justin McCrary is Professor of Law, University of California, Berkeley. Principal research interests: crime and urban problems, law and economics, corporations, employment discrimination, and empirical legal studies. Web page with link to CV: http://www.econ.berkeley.edu/~jmccrary/.
Jens Hainmueller (Stanford University, Department of Political Science)
Jens Hainmueller is Professor in the Stanford Political Science Department, and co-Director of the Stanford Immigration Policy Lab. He also holds a courtesy appointment in the Stanford Graduate School of Business. His research interests include statistical methods, political economy, and political behavior. Web page with link to CV: http://www.stanford.edu/~jhain//. Papers on SSRN: https://ssrn.com/author=739013.
Advanced Workshop Faculty (in order of appearance)
Donald Rubin (see above)
Fabrizia Mealli (University of Florence, Department of Statistics and Computer Science)
Fabrizia Mealli is Professor of Statistics at the University of Florence and external research associate at the Institute for Social and Economic Research (ISER) at the University of Essex. Her research focuses on causal inference and simulation methods, program evaluation, missing data, and Bayesian inference. She is a fellow of the American Statistical Association, and associate editor of Journal of the American Statistical Association (JASA), Biometrics, and Annals of Applied Statistics. Web page with link to CV: http://local.disia.unifi.it/mealli/
Yiqing Xu (University of California San Diego, Department of Political Science)
Yiqing Xu is Assistant Professor of Political Science at University of California, San Diego. His main methods research involves causal inference with panel data. Website: http://yiqingxu.org/.
Justin Grimmer (University of Chicago, Department of Political Science)
Justin Grimmer is Associate Professor of Political Science at the University of Chicago. His primary research interests include political representation, Congressional institutions, and text-as-data methods. Website: https://www.justingrimmer.org/
Main Workshop Outline
Monday June 18 (Donald Rubin): Introduction to Modern Methods for Causal Inference
Overview of causal inference and the Rubin “potential outcomes” causal model. The “gold standard” of a randomized experiment. Treatment and control groups, and the core role of the assignment (to treatment) mechanism. Causal inference as a missing data problem, and imputation of missing potential outcomes. Rerandomization. One-sided and two-sided noncompliance.
Tuesday June 19 (Justin McCrary): Matching and Reweighting Designs for “Pure” Observational Studies
The core, untestable requirement of selection [only] on observables. Ensuring covariate balance and common support. Subclassification, matching, reweighting, and regression estimators of average treatment effects. Propensity score methods. Methods that aim directly at covariate balance.
Wednesday June 20 (Justin McCrary): Instrumental variable methods
Causal inference with instrumental variables (IV), including (i) the core, untestable need to satisfy the “only through” exclusion restriction; (ii) heterogeneous treatment effects; and (iii) intent-to-treat designs for randomized trials (or quasi-experiments) with noncompliance.
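As a toy illustration of the IV logic sketched in that session description, the simplest estimator for a binary instrument is the Wald ratio: the reduced-form effect of the instrument on the outcome, divided by the first-stage effect of the instrument on treatment take-up. This is a minimal sketch with invented group means, not material from the workshop itself:

```python
# Wald IV estimator for a binary instrument, using made-up group means.
# y_z1/y_z0 are mean outcomes by instrument value; d_z1/d_z0 are mean
# treatment take-up rates by instrument value.

def wald_iv(y_z1, y_z0, d_z1, d_z0):
    """IV estimate = (reduced-form effect) / (first-stage effect).

    Credible only if the instrument affects the outcome *only through*
    the treatment -- the untestable exclusion restriction noted above.
    """
    first_stage = d_z1 - d_z0
    if first_stage == 0:
        raise ValueError("instrument does not move the treatment")
    return (y_z1 - y_z0) / first_stage

# Hypothetical means: outcome rises by 2.0 when the instrument is on,
# and take-up rises from 40% to 80%, so the implied effect is 2.0 / 0.4.
print(wald_iv(y_z1=7.0, y_z0=5.0, d_z1=0.8, d_z0=0.4))
```

With heterogeneous treatment effects, this ratio is interpreted as a local average treatment effect for compliers, which is part of what the session covers.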
Thursday June 21 (Jens Hainmueller): Panel Data and Difference-in-Differences
Panel data methods: pooled OLS, random effects, correlated random effects, and fixed effects. Simple two-period DiD. The core “parallel changes” assumption. Testing this assumption. Leads and lags and distributed lag models. When does a design with unit fixed effects become DiD? Accommodating covariates. Triple differences. Robust and clustered standard errors. Introduction to synthetic controls.
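The simple two-period DiD mentioned in that outline reduces to a difference of group-mean changes. A minimal sketch, using hypothetical outcome means rather than anything from the workshop materials:

```python
# Two-period difference-in-differences estimator with made-up group means.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (treated group's change) - (control group's change).

    Valid only under the 'parallel changes' assumption: absent treatment,
    the treated group's outcome would have moved like the control group's.
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical means: the treated group rose by 6, controls rose by 2,
# so the estimated treatment effect is 4.
effect = did_estimate(treat_pre=10.0, treat_post=16.0,
                      ctrl_pre=9.0, ctrl_post=11.0)
print(effect)
```

The control group's change serves as the counterfactual trend for the treated group, which is why testing the parallel-changes assumption (for example, with pre-treatment leads) matters so much.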
Friday morning June 22 (Jens Hainmueller): Regression Discontinuity
(Regression) discontinuity (RD) research designs: sharp and fuzzy designs; bandwidth choice; testing for covariate balance and manipulation of the threshold; discontinuities as substitutes for true randomization and sources of convincing instruments.
Friday afternoon: Feedback on your own research
Attendees will present their own research design questions from current work in breakout sessions and receive feedback on research design. Session leaders: Bernie Black, Mat McCubbins, Jens Hainmueller. Additional parallel sessions if needed to meet demand.
Stata and R sessions
On Tuesday, Wednesday, and Thursday, we will either run parallel Stata and R sessions to illustrate actual code to implement the designs discussed in the lectures, or build Stata code into the lecture slides.
Advanced Workshop Outline
Monday June 25 (Donald Rubin and Fabrizia Mealli): Principal Stratification and Censoring
Generalizing the causal-IV strata of compliers, always-takers, never-takers, and defiers. Which treatment effects can be estimated for which strata? Handling missing data and censoring through “death” or attrition.
Tuesday June 26 morning (Donald Rubin and Fabrizia Mealli): Direct and indirect causal effects.
“Mediation” analysis: Direct and indirect causal effects versus principal associative and dissociative effects.
Tuesday June 26 afternoon (Yiqing Xu): Advanced matching
Advanced matching and reweighting methods, with an emphasis on panel data applications. Generalized synthetic controls. Relative strengths and weaknesses of different matching and reweighting approaches.
Wednesday June 27 (Justin Grimmer): Machine learning (predictive inference) meets causal inference
Introduction to machine learning approaches. When and how can machine learning methods be applied to causal inference questions.
Registration and Workshop Cost
Main Workshop: tuition is $900 ($600 for graduate students (PhD, SJD, or law) and post-docs). The workshop fee includes all materials, a temporary Stata 15 license, breakfast, lunch, snacks, and an evening reception on the first workshop day.
Advanced Workshop: tuition is $600 ($400 for graduate students (PhD, SJD, or law) and post-docs). There is a $100 discount for persons attending both workshops.
You can cancel either workshop five weeks in advance (May 14 for the main workshop, May 21 for the advanced workshop) for a 75% refund, or three weeks in advance for a 50% refund (in each case, less the credit-card processing fee); there are no refunds after that.
We know the workshop is not cheap. We use the funds to pay our speakers and for meals and other expenses; we don’t pay ourselves.
You should plan on full days, roughly 9:00-5:00. Breakfast will be available at 8:30.
Questions about the workshops
Please email Bernie Black (firstname.lastname@example.org) or Mat McCubbins (email@example.com) for substantive questions or fee waiver requests, and Laura Dimitrijevic (firstname.lastname@example.org) for logistics and registration.
Earlier today I came across this posting by Mark Perry, in which he quickly summarized some results of a study that compared the public perception of the profitability of companies versus the reality. The simple graph is quite telling and effectively communicates just how divorced public perception is from reality.
Now we all know that levels of profitability can be manipulated to some degree by clever accounting and effective use of the tax code, but this is really quite a difference.
Here is the link to the posting by Dr. Perry:
I just came across a very interesting chart developed by the researchers at the Urban Institute. It uses data from the credit bureaus to create a county-level map of debt in the USA. Here is a screen grab of the map for the country as a whole. Lighter shaded areas indicate less debt, while those with darker shades of blue indicate higher levels of debt.
One of the neat aspects of this map is that it is interactive when viewed at the Urban Institute site at https://apps.urban.org/features/debt-interactive-map/. You can zoom in on different geographies to examine the results in greater detail. Here I've zoomed in on Arkansas.
You can also click on any specific county to get some comparative data on the debt, income, and demographics for that county. Check it out!
Over the past several years, I have done quite a bit of work with clients using the Kano methodology. For those of you who are not familiar with this method, it is designed to test the attractiveness of a potential product feature by testing the new feature in comparison against the status quo. You can find out a bit more about the Kano method by visiting this part of the website.
Lately, for some different clients, we’ve developed a new twist on the Kano methodology. I’ve taken to calling it the Contrarian Kano. It is applicable for use when the introduction of one feature may result in some consequences that are less attractive, and you want to test for the down-side risks that are associated with those less attractive consequences.
How might such a situation develop? We’ve seen two different models:
The implementation of the Contrarian Kano proceeds identically to a normal Kano, with the feature presented in a straightforward fashion. At the analysis stage, we focus on the number of Reversals to determine the degree to which the negative consequence is producing substantial pushback from the target audience.
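For readers unfamiliar with how Reversals arise: each Kano item pairs a "functional" question (how would you feel if the feature were present?) with a "dysfunctional" question (how would you feel if it were absent?), and the pair of answers maps to a category. The sketch below uses one common version of the Kano evaluation table; the response codes and the tallying step are illustrative, not our production analysis:

```python
# Classify Kano response pairs and tally Reversals. Answers are coded
# 1 = like, 2 = expect it, 3 = neutral, 4 = can live with, 5 = dislike,
# for the functional (feature present) and dysfunctional (feature absent)
# questions. The mapping follows a standard Kano evaluation table.

from collections import Counter

def kano_category(functional, dysfunctional):
    f, d = functional, dysfunctional
    if f == 1 and d == 1:
        return "Questionable"       # likes both presence and absence
    if f == 1 and d == 5:
        return "One-dimensional"
    if f == 1:
        return "Attractive"
    if f == 5 and d == 5:
        return "Questionable"
    if f == 5:
        return "Reversal"           # dislikes presence -> pushback
    if d == 5:
        return "Must-be"
    if d == 1:
        return "Reversal"           # prefers the feature absent
    return "Indifferent"

# Hypothetical respondents: (functional answer, dysfunctional answer).
responses = [(5, 1), (3, 3), (5, 2), (1, 5)]
counts = Counter(kano_category(f, d) for f, d in responses)
print(counts["Reversal"])
```

In a Contrarian Kano analysis, a large share of Reversals for the negatively tinged feature is the signal that its downside consequences outweigh its appeal for a meaningful segment of the audience.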
A recent news brief in Science magazine (Volume 355, Issue 6320, page 16) highlighted concerns that many statisticians have regarding continued data availability from the constitutionally mandated census as well as the American Community Survey (ACS). Efforts to gear up for the 2020 census are underway, and will require a significant funding authorization from Congress this year.
While eliminating the census is problematic -- simply because it is mandated by the constitution -- the 70-item ACS, sent to 3.5 million homes annually, is perhaps in greater trouble. This survey is the replacement for the old long-form census questionnaire, and is used to allocate almost $500 billion in federal program dollars. The proposed director of OMB is not a fan of the ACS; he has voted to defund the study in the past.
I know that in my work I have often used census and related data from the Department of Commerce to conduct analyses to assist my clients. Defunding these efforts is not, in my opinion, a prudent step.
For more information see the original article in Science magazine.
I'd like to wish all of my friends a most joyous season, and let's all have a happy and prosperous new year. All my best!
David J. Mangen
I'll use this space to make some occasional comments about statistics, numbers and research issues as seen in the world today.