Responses that looked like bot activity or that could not be interpreted were classified as invalid; these accounted for 20% of all responses. This process resulted in 1,429 valid responses (44%) and 1,807 excluded responses (56%), where the latter includes both missing responses (36%) and invalid responses (20%).

Filtering Responses

In our analysis, we included only the valid responses, effectively using our free-form question as a screening question. In the excluded subset, the distribution between options is generally more even, which is consistent with more random noise. However, it was not just noise: some proportion of those responses must have come from real, experienced web developers. Unfortunately, they cannot be identified without making assumptions about what a good response looks like, which would likely only bias the results toward what we already think is reasonable.

This represents a flaw in the survey design. The free-form question should have been required so that more of the responses could be identified as valid. This likely biased the results in some way. Because of this screening, and because only 29% of the used responses came from MDN, the results cannot be directly compared to the 2019 MDN Developer Needs Assessment.

Results

The high-level survey results are summarized here. See appendix C for full results, and the Findings section for a synthesis of survey and interview results with much more granularity.