Using R-Shiny to Teach Quantitative Research Methods

What and why

Over the past couple of years I have been developing a small suite of R-Shiny tools for teaching quantitative research methods. R-Shiny is an R library for writing interactive web pages with full access to the power of the R statistical programming language. The tools I have written include demonstrations of ideas, self-teaching exercises and assessments.

If you use R already, writing Shiny web pages is a relatively easy extension, though programming an interactive web page has some important differences from conducting a data analysis. R is very general and very powerful, so there are lots of possibilities. This is both a strength and a weakness: generality means that while lots of things are possible, many require extensive programming. Nonetheless, it is relatively quick and easy to create simple and robust tools.

This (relatively long) blog is based on an early draft of a paper summarising some of the main things I have learnt, and showcasing a handful of examples. I’m putting it out partly just to record and display what I’ve done, but also to solicit feedback, particularly about how best to use apps like this to good pedagogical effect.

Apologies for the length: I include a table of contents to help navigate.

Demos

Small examples

A very simple example at http://teaching.sociology.ul.ie:3838/apps/normsd/ uses Shiny input sliders to manipulate an R graph, showing a normal distribution. This illustrates the basic principles of Shiny.

I won’t cover the details of programming here, but this is the code for the first example:

library(shiny)

ui <- fluidPage(
    headerPanel("Normal distribution: defined by mean and standard deviation"),
    mainPanel(
        plotOutput("normPlot"),
        sliderInput("mean", "Mean:", value = 0.0, min = -10.0, max = 10.0, step = 0.1),
        # min = 0.1 avoids a degenerate distribution with sd of zero
        sliderInput("sd", "Standard Deviation:", value = 1.0, min = 0.1, max = 10.0, step = 0.1),
        p("Use the sliders to vary the mean (this affects where the distribution is centred) 
           and the standard deviation (this affects how spread out it is).")))

server <- function(input, output) {
    output$normPlot <- renderPlot({
        x <- seq(-10, 10, length = 1000)
        y <- dnorm(x, input$mean, input$sd)
        plot(x, y, type = "l", xlab = "X", ylab = "Probability Density",
             main = "Normal Distribution", axes = TRUE, ylim = c(0.0, 0.4))
    })
}

shinyApp(ui, server)

As can be seen, there are two main elements, a UI and a server function, which are combined into the app by the shinyApp(ui, server) call. The UI defines the layout and content of the page, including a header, the plot, the inputs and an explanatory paragraph. The server function just draws the graph, using the values input via the sliders. Real apps are usually more complex, but this is the core structure of a Shiny app.

A prettier example

A prettier and more complex example is at http://teaching.sociology.ul.ie:3838/apps/orrr/. This uses ggplot instead of base R graphics, makes some more calculations (presented in a little table), and uses radio buttons to switch between distributions. However, it is still a simple small example, where the user tweaks Shiny inputs and sees the R-produced outputs change in real time. Pedagogically, it is intended to allow exploration of the relationship between differences-in-proportions, relative rates and odds-ratios for binary outcomes.
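The quantities the app lets users explore can be computed directly in R. A minimal sketch (the proportions here are arbitrary illustrative values, not the app's actual code):

```r
# Two groups with different probabilities of a binary outcome
p1 <- 0.20   # proportion with the outcome in group 1
p2 <- 0.10   # proportion with the outcome in group 2

diff_prop     <- p1 - p2                              # difference in proportions
relative_rate <- p1 / p2                              # relative rate (risk ratio)
odds_ratio    <- (p1 / (1 - p1)) / (p2 / (1 - p2))    # odds ratio

c(diff = diff_prop, rr = relative_rate, or = odds_ratio)
```

Note how the three measures diverge: here the relative rate is 2 but the odds ratio is 2.25, and the gap widens as the proportions grow.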

More complex examples

A more complex example explores the leverage of outliers in linear regression: http://teaching.sociology.ul.ie:3838/influence/. It creates a basic data set, where Y depends on X, and draws the regression line over the scatterplot, both with and without an outlier whose values are set by two sliders. The basic data set stays constant (until the “Refresh” button is clicked), but as you change the values of the outlier the display (and the Cook’s Distance calculation) changes instantaneously. This involves a little back-end trickery to separate the stable from the variable data, but is relatively straightforward as all the data is ephemeral.
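The underlying idea can be reproduced outside Shiny in a few lines of R. This sketch (with made-up data; in the app the outlier's position is set by the sliders) fits the regression with and without a single outlier and computes Cook's Distance:

```r
set.seed(42)
x <- runif(30, 0, 10)
y <- 2 + 0.5 * x + rnorm(30, sd = 1)

# Add a single high-leverage outlier, far outside the range of the data
x_out <- c(x, 20)
y_out <- c(y, 0)

fit_base <- lm(y ~ x)            # the stable base data set
fit_out  <- lm(y_out ~ x_out)    # the same data plus the outlier

# The slope is dragged towards the outlier
coef(fit_base)["x"]
coef(fit_out)["x_out"]

# Cook's Distance flags the added point (observation 31) as dominant
cooks <- cooks.distance(fit_out)
which.max(cooks)
```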

(Pedagogically, I think this example is more effective than the odds-rate/relative-rate one. See also http://teaching.sociology.ul.ie:3838/logitinfl/ for the same idea ported to the logistic regression context.)

Even more complexity: storing data

An extra level of power (with consequent programming complexity) is offered by storing data separately from the Shiny app, so that it persists (and may be updated) from invocation to invocation. This example is meant for class-room exploration of the binomial distribution: http://teaching.sociology.ul.ie:3838/heads/. Students are asked to toss a coin four times, and enter the result (in the order the coins fell). Everyone accesses the same app individually, but their input is combined. In a large enough class, the summary distribution will fairly rapidly build up to an approximation of the binomial distribution for N=4, and we should also see that the 16 different sample permutations occur with approximately equal frequency.

This requires a database that is updated by, but is independent of, the invocation of the app, as each student’s device (laptop, tablet or smart-phone) will access its own instance. In this case the data is stored in a small SQLite data table. SQLite provides much of the power of SQL without requiring a proper SQL database installation. Something like an SQL database is required to allow the possibility of multiple instances of the app writing to the database at the same time.
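A minimal sketch of this kind of storage, assuming the DBI and RSQLite packages are available. The table and column names are my own, and for illustration it writes to an in-memory database where the real app would use a shared file on the server:

```r
library(DBI)

# SQLite gives SQL semantics over a single local file, so several app
# instances can safely write to the same table
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbExecute(con, "CREATE TABLE IF NOT EXISTS tosses (id TEXT, sequence TEXT)")

# Each student submits the order in which their four coins fell, e.g. "HTTH"
dbExecute(con, "INSERT INTO tosses VALUES (?, ?)", params = list("12345", "HTTH"))
dbExecute(con, "INSERT INTO tosses VALUES (?, ?)", params = list("54321", "HHHT"))

# Pool everyone's results into the class distribution of heads
results <- dbGetQuery(con, "SELECT sequence FROM tosses")
n_heads <- nchar(gsub("T", "", results$sequence))
table(n_heads)

dbDisconnect(con)
```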

One constraint of this particular app is that the database needs to be re-set by the instructor before each class. Pedagogically, it can be twinned with http://teaching.sociology.ul.ie:3838/binsim/, which simulates the binomial distribution, but doesn’t store data persistently.
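The simulation side can be sketched in base R: with enough simulated students, the distribution of heads converges on the theoretical binomial probabilities.

```r
# Simulate many students each tossing four fair coins
set.seed(1)
heads    <- rbinom(10000, size = 4, prob = 0.5)
observed <- table(heads) / 10000
expected <- dbinom(0:4, size = 4, prob = 0.5)   # 1/16, 4/16, 6/16, 4/16, 1/16

round(rbind(observed = as.numeric(observed), expected = expected), 3)
```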

Tip: An additional issue for apps to be used in class is that, while nearly all students will have a device (smartphone, tablet or laptop) with net access, the networking might struggle. For demos, it’s good to have a local version on the teaching PC. This will be less of a problem in PC labs, where connection is not via wifi.

Self-learning exercises

Self-learning exercises are a particularly useful category of app. They present a problem or exercise to the student, with random values, and check the correctness of the result the student offers. If the checking can be complemented by showing the correct calculations, so much the better. In general, self-learning exercises don’t need to store data persistently, but it can be nice to present a temporary history of the results achieved during the current session.

Simple: reading the Normal distribution

A simple example is at http://teaching.sociology.ul.ie:3838/so5041/ass2/q1/. This is aimed at facilitating learning how to read the table of the Standard Normal Distribution. The student is asked to calculate the probability of getting a value below a given X, for a normal distribution with a given mean and standard deviation. The graph is given as a visual aid, but is not essential. Students enter their calculations, and click “Submit Answer”. They will then be told whether they are correct or not, and can repeatedly submit results for the same question, or click “Start Again” to get new test values. At all points, they can click on the “Worked Example” tab, to see how the calculation should be done. Since this is self-learning, there is no problem with the fact that they can “cheat” by looking at the answer, as they can re-set the problem without limit.
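The core logic of such an exercise – generate random question values, then check a submitted answer within a tolerance – can be sketched as follows (the function names and value ranges are illustrative, not the app's actual code):

```r
# Generate random values for one instance of the question
make_question <- function() {
  m <- round(runif(1, 50, 150), 1)                 # mean
  s <- round(runif(1, 5, 25), 1)                   # standard deviation
  x <- round(m + runif(1, -2.5, 2.5) * s, 1)       # the X to look up
  list(mean = m, sd = s, x = x, answer = pnorm(x, m, s))
}

# "Submit Answer": correct if within a small tolerance of the true value
check_answer <- function(q, submitted, tol = 0.005) {
  abs(submitted - q$answer) <= tol
}

set.seed(7)
q <- make_question()
check_answer(q, pnorm(q$x, q$mean, q$sd))   # TRUE for a correct submission
```

The tolerance matters: students reading a printed table will only recover three or so decimal places, so exact comparison would mark correct work as wrong.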

More challenging: calculate confidence interval from data

A more challenging example is at http://teaching.sociology.ul.ie:3838/so5041/ass2/q5/. This requires the student to create a confidence interval around a mean, given 10 observations. Prior to asking the students to attempt the exercise, the whole process will have been worked through in the PC lab, using a spreadsheet, and using the table of the t-distribution.
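The calculation the student is asked to perform might look like this in R (the data here are simulated; the app generates its own ten observations):

```r
# 95% confidence interval around the mean of 10 observations,
# using the t-distribution with df = n - 1
set.seed(3)
obs <- round(rnorm(10, mean = 100, sd = 15), 1)

n     <- length(obs)
m     <- mean(obs)
se    <- sd(obs) / sqrt(n)
tcrit <- qt(0.975, df = n - 1)   # two-sided 95%, as read from a t-table

ci <- c(lower = m - tcrit * se, upper = m + tcrit * se)
ci

# The same interval via R's built-in test, as a check
confint_t <- t.test(obs)$conf.int
```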

One of the main concerns with self-learning exercises is that they should present a pedagogically useful task, and that the randomisation is guaranteed not to throw up values that make the task either meaningless or unduly difficult (or easy).

Another important consideration in respect of writing the worked example is to accommodate, where necessary, variations in the steps needed to solve the problem. In the first case above, which is intended to show how to read a table of the Standard Normal Distribution, the steps differ according to whether the X is below or above the mean, and this is accounted for in the programming. To see this, go to the app, click on the “Worked Example” tab, and click on “Start again” repeatedly.

Assessments: permanently storing individual data

I have developed a framework for running assessments, where every student gets the same questions but with unique numbers, with automatic marking and feedback. Assessments can be very like the self-learning exercises, but with two key differences: the student can’t see the correct result, and the results must be stored in a persistent manner.

Making sure that the range of possible random values always creates problems with the same basic level of difficulty is not inordinately difficult, but it takes some care. Typically, this involves ensuring that the numbers are random within bounds: for a question on the normal distribution, for instance, set the X value to fall at random within the range of 0.1 to 3 standard deviations above the mean, rather than more freely; for a t-test, construct the data so that the t-statistic falls in a given range (everyone rejects the null, for instance), and so on. In other words, design the question around the answer and the work required to find it.

Giving each student question values that differ between students but are fixed for any one student (e.g., if they make a second attempt at the question) is relatively easy: use their ID value to set the random seed. Better, combine their ID with other information (such as the question number, the module code, and the year) to set the seed: the year, so that a student repeating the year will not get the same values; the question number, so that different questions’ random values will not be related.
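Both ideas – bounded randomisation and per-student seeding – can be sketched together. The seeding scheme below is a simple base-R stand-in (a hash-based seed would be more robust), and all names and ranges are illustrative:

```r
# Derive a seed from the ID combined with question, module and year, so
# values are stable across attempts but differ between questions and years
make_seed <- function(id, question, module, year) {
  v <- utf8ToInt(paste(id, question, module, year, sep = "-"))
  sum(v * seq_along(v))   # position-weighted sum; a real app might hash instead
}

# Random within bounds: X falls 0.1 to 3 standard deviations above the
# mean, so every student's version is comparably difficult
question_values <- function(id, question, module = "SO5041", year = 2020) {
  set.seed(make_seed(id, question, module, year))
  m <- round(runif(1, 80, 120), 1)
  s <- round(runif(1, 10, 20), 1)
  x <- round(m + runif(1, 0.1, 3) * s, 1)
  list(mean = m, sd = s, x = x)
}

# Same student, same question: identical values on a second attempt
identical(question_values("12345", 1), question_values("12345", 1))   # TRUE
```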

Verification of ID

Verifying the student’s identity (within reason) is important. It is not necessarily possible to stop them getting unfair help, but it should be possible to stop other people submitting answers for them without their involvement or permission. A general solution would be to tie the Shiny server to a password-authentication system. This is provided with the paid premium version of R-Shiny, but is not easily possible with the free version. Instead, a workable strategy is to give each student a separate link, which they need to validate with their ID number. The links include a cryptographic hash based on their ID (plus the module code, year, etc.). When the student follows the link and enters their ID, the hash is recreated and checked against the hash in the link. The cryptographic hash means that it is effectively impossible for others to guess a student’s link, so as long as the links remain private each student controls their own assignment.[1]
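The link scheme can be sketched as follows. Note that toy_hash here is a deliberately simplistic stand-in for a real cryptographic hash (e.g., MD5 or SHA via the digest package); the function names and salt are likewise illustrative:

```r
# NOT cryptographic -- a placeholder so the sketch is self-contained
toy_hash <- function(s) {
  v <- utf8ToInt(s)
  sprintf("%08x", as.integer(sum(v * seq_along(v) * 2654435761) %% 2^31))
}

# The link embeds a hash of ID + module + year + secret salt
make_link <- function(id, module, year, salt) {
  key <- toy_hash(paste(id, module, year, salt, sep = "|"))
  paste0("http://teaching.sociology.ul.ie:3838/exass/?key=", key)
}

# When the student enters their ID, recreate the hash and compare
validate <- function(entered_id, key, module, year, salt) {
  toy_hash(paste(entered_id, module, year, salt, sep = "|")) == key
}

salt <- "keep-this-secret"                     # never published
link <- make_link("12345", "SO5041", 2020, salt)
key  <- sub(".*key=", "", link)

validate("12345", key, "SO5041", 2020, salt)   # TRUE
validate("54321", key, "SO5041", 2020, salt)   # FALSE: wrong ID for this link
```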

Once the links are generated, they can be distributed to students by e-mail. The Sakai Post-Em facility (the UL SULIS learning management system is a Sakai implementation) can do this quite neatly (feed it a spreadsheet with ID in one column and the link in another; presumably analogous functionality exists in other learning management systems). See an example in Table 1.

Table 1: Students’ individual cryptographic tokens for accessing assessments
ID     Name    Link
12345  Sleepy  http://teaching.sociology.ul.ie:3838/exass/?key=fa7013a5daa77ed7605495312a2d73ce
54321  Doc     http://teaching.sociology.ul.ie:3838/exass/?key=236f13ccd50f0e573b1f743708668fc6
77777  Sneezy  http://teaching.sociology.ul.ie:3838/exass/?key=9ed6336de5f2e4a6caa30a757ff103b8

A small example

The links above lead to a specimen assessment exercise, with three questions. To validate the link, the student enters their ID, and the questions then appear in full, along with their most recent responses. Click on one of the links, enter the corresponding ID, and try the questions.

How to assess

While the structure of an assessed exercise will be very like a self-learning exercise, we need to add several elements:

  • persistent data storage
  • functionality to allow multiple entries and take the best answer, if relevant
  • marking the questions and summing the score

Like the self-learning exercises, assessments need code:

  • that does the correct calculation, and assigns marks
  • that does feedback (though to be delivered differently)

Storing data persistently

There are endless possibilities for storing data persistently, from writing CSV files for each attempt by each student, all the way to having R-Shiny interact with a fully-fledged database. The latter is a good idea where large numbers of students are likely to be working on the assessment simultaneously, but the former will work for low to medium numbers. A good compromise is to use sqlite3, in effect a lightweight database program that writes to a local file: the setup is straightforward and copying the database is easy, but you can still use SQL to interact with it. In my framework, I have written a number of R functions such that each time a student opens the assignment, a new row is added to the database with their ID and a time-stamp, and their answers for that session are stored there as they work. The students’ latest answers are shown back to them on screen. (Experiment with the examples linked from Table 1.)

Assessment question structure

There are many ways to write the assessment code. This is an area where the power of R means many solutions are possible, but also that any specific solution will be relatively complex.

In my current working framework, a main file sets up the page and defines functions for reading and writing the persistent data to the SQL database. It then reads in a subfile per question. Each question subfile defines a number of attributes: a caption, a list of one or more inputs for answers, and markers to say whether a graph and/or a data table should be shown. Each question also contains a number of user-written R functions:

  • to create the data for the question
  • to insert values in the question text and to write it
  • to show any required graph or table
  • to calculate the correct answers for the question (defining a tolerance), and ideally a short feedback text showing how the question is correctly answered

The functions for data creation, question text and graph/table printing will be used by R-Shiny to create the online problem, and the same data creation and answer functions will be used by a separate R program used to evaluate the submissions. The functions to talk to the SQL database will also be used both by R-Shiny and the subsequent evaluation program.
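One plausible shape for such a question subfile (the attribute and function names are my own invention, not the framework's actual API):

```r
# Sketch of a single question definition, shared by the Shiny app
# and the separate marking program
question <- list(
  caption    = "Reading the normal distribution",
  inputs     = c("prob"),     # one numeric answer box
  show_graph = TRUE,
  show_table = FALSE,

  # Create the data for this question, reproducibly from a seed
  make_data = function(seed) {
    set.seed(seed)
    list(mean = round(runif(1, 80, 120), 1), sd = round(runif(1, 10, 20), 1))
  },
  # Insert the values into the question text
  question_text = function(d) {
    sprintf("For a normal distribution with mean %.1f and SD %.1f, what proportion lies below %.1f?",
            d$mean, d$sd, d$mean + d$sd)
  },
  # Correct answer, with a tolerance for marking
  correct = function(d) {
    list(prob = pnorm(d$mean + d$sd, d$mean, d$sd), tolerance = 0.005)
  }
)

# Both the app and the evaluation program call the same functions
d   <- question$make_data(2020)
ans <- question$correct(d)
```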

Feedback

The feedback program uses the functions in the question subfiles (reading the database, comparing the results to the correct answers and creating a short feedback text) to construct a CSV file containing, for each person and each question, the answer given, the correct answer, the points awarded if correct, and the short worked-answer text. Varying levels of sophistication are possible. A good strategy is to check whether a question has been answered multiple times, and select the best answer given (though this needs to be avoided for multiple-choice questions, where repeated attempts would let students simply cycle through the options). Another value-added strategy is to look for common errors (for instance, using z instead of t in a confidence interval calculation), and give partial credit.
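The partial-credit idea can be sketched as a small marking function (the function name, mark values and tolerance are illustrative):

```r
# Mark the upper bound of a 95% CI, with partial credit for the common
# error of using the z critical value instead of t
mark_ci_upper <- function(submitted, obs, tol = 0.01) {
  n  <- length(obs)
  m  <- mean(obs)
  se <- sd(obs) / sqrt(n)
  correct <- m + qt(0.975, n - 1) * se    # t-based: full marks
  z_error <- m + qnorm(0.975) * se        # z-based: the common mistake
  if (abs(submitted - correct) <= tol) return(list(mark = 1,   note = "Correct"))
  if (abs(submitted - z_error) <= tol) return(list(mark = 0.5, note = "Used z instead of t"))
  list(mark = 0, note = "Incorrect")
}

obs <- c(98, 103, 110, 95, 101, 99, 104, 97, 102, 100)
mark_ci_upper(mean(obs) + qnorm(0.975) * sd(obs) / sqrt(10), obs)$note
# "Used z instead of t"
```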

However, the feedback-processing code is relatively complicated and fragile (for instance, adding or deleting a question requires changes in many locations), and thus far I have been using it only with careful manual checking. Making the code more sophisticated only adds to this complexity.

The feedback code also depends strongly on the precise structure of the questions. I expect this to settle down over another two or three semesters.

For an example of feedback, see the following table (this would be one row of a spreadsheet, with a row per question per student).

    id   question                   answersubmitted       correctanswer   outcome     point 
 12345   Qn 1 P between X1 and X2              0.11   0.254685736884538   Incorrect       0 
 explanation                                                                                
 z1 = (X1-mean)/SD = (92.0-102.0)/16.3 = -0.613                                             
 z2 = (X2-mean)/SD = (103.0-102.0)/16.3 = 0.061                                             
 From the table of the standard normal distribution (or a computer function):               
 – proportion below z1=-0.613 is 0.270 (equals proportion above -z1, if z1<0)               
 – proportion above z2=0.061 is 0.476                                                       
 P = 1 – 0.476 – 0.270 = 0.255                                                              
 questiontext                                                                               
 If a normal distribution has mean and standard deviation respectively                      
 102.0 and 16.3, what proportion of the distribution lies between                           
 92.0 and 103.0?                                                                            

Reflection

It is not always clear how to write effective apps, whether demos or self-learning exercises. It requires serious pedagogical thinking, and creativity: selecting key concepts whose understanding matters in the domain being studied, thinking hard about how to present them, and asking whether the interactivity can really help learning. What are the typical gaps in understanding, what are the common pitfalls, and how can we address them?

My impression of students’ use of self-learning apps is that it works very well. The tasks are closely aligned with what we cover in class, because they are derived from assessments – as teachers, we have plenty of experience in setting assessments to match what is being taught. Because of the close match with assessment, students engage with the exercises and see exactly where they went wrong if they didn’t get the right answer. My impression is that engaged students get full or near-full marks on most questions, because of this opportunity for independent interactive learning. In fact, this prompts me to reflect more carefully on the assessed tasks themselves: if we can get people to complete them more effectively, perhaps we need to be even more careful in thinking about what they are supposed to achieve (i.e., what are the desired “learning outcomes” and how do the tasks achieve them).

The key advantage of these apps is computer-supported practice, where students can interactively and independently acquire relevant technical skills, profiting from immediate feedback of a nature that a textbook can’t provide. It also provides a framework where students can be set more frequent but smaller homework tasks, synchronous with the learning done in class, which can be more beneficial than a smaller number of larger assessments. Quick feedback is reassuring for students, and it also makes the instructor aware of emerging problems sooner.

Required skills for implementation

If you know R, you can leverage R-Shiny immediately, though an interactive web page is a different animal from a sequential data analysis and some learning is necessary. Because R and Shiny are very flexible, there are lots of possibilities. On the downside, this flexibility opens the risk of writing increasingly complex and idiosyncratic code.

In particular, processing assessment results is tricky – or at least as tricky as you want to make it: selecting the best answer, coping with potentially ambiguous answers (e.g., where a yes/no/maybe judgment is required), detecting common mistakes.

In other words, you need good data-analysis programming skills to engage with this; but if you are teaching quantitative methods, you probably have them.

Technical Challenges

I have found it reasonably easy (though time-consuming) to write the apps, and to develop the assessment framework that I use. However, I am aware of many remaining technical challenges, including the following:

  • Device compatibility: something that looks OK on big and small screens
  • Aesthetics and ergonomics: make it look good, be as easy to use as possible, be pedagogically effective
  • Protecting the server from attack or overload
  • Making the data storage for assessments more secure
  • Making the assessment and feedback code more robust
  • Making feedback more digestible

Shiny server

The Shiny apps run on a specialised server. If you can access a computer that is visible on the wider net (most PCs in UL are only visible internally), you can install your own instance. This is not terribly difficult but it requires some minor non-standard IT skills. If you install your own instance, the free version has almost complete functionality, but the paid premium version has some extra features, plus support. See https://rstudio.com/products/shiny/download-server/.

It is also possible to host apps on https://www.shinyapps.io/, which is run by RStudio (the organisation behind Shiny, the tidyverse and the RStudio interface to R). Various pricing tiers are available, ranging from free to very expensive.

Footnotes:

[1] This is nearly as good as password control, as it pairs one known token (the ID number) with one unknown one (the secret link, which plays the role of a password). However, if the code that creates the hash is known, it becomes trivially easy to forge links. Hence, a secret “salt” is added to the hash.
