Description
We are already nearly to the end of Q2, so this is more a quick review than anything else. I want to get better at doing at least this amount of high-level review and planning.
Live tracking of work continues in this GH project board: Web Monitoring
🔄 = Carry-over from Q1 (#174).
Overall Status
Q1 was spent getting basic systems that had been turned off back online and getting a basic handle again on the infrastructure and projects. There are some major tasks I didn’t get done that are carried over (chiefly: automating crawls, since I think we have landed in a spot where we cannot comfortably rely on IA for this). As the human analysts got up to speed, we’ve also come up with a few new project ideas we want to explore (a new public tracker, regulation research tools for analysts). This is a lot to balance.
Less will get done this quarter because I am traveling for half of it, and will probably not have a lot of time for stuff beyond basic maintenance during that period.
Critical Goals
- 🔄 Automate and operationalize our crawls (currently kicked off manually M/W/F; a rough sketch of the whole pipeline follows this list):
- Keep seed lists updated.
- Crawl seeds with Browsertrix-Crawler.
- Resume crawls that died in the middle.
- Upload crawl output to S3.
- Upload crawl output to IA.
- Import crawl data to web-monitoring-db.
- Analyze logs for errors that need to be handled specially because the server is not responding over HTTP.
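As a strawman for what “operationalized” might look like, here’s a minimal sketch of the pipeline as a single script. Everything specific in it is a placeholder (seed file path, S3 bucket, IA item name), and it skips the hard parts: resuming dead crawls, log analysis, and the web-monitoring-db import.

```python
#!/usr/bin/env python3
"""Rough sketch only: paths, the bucket, and the IA item name are placeholders,
and resuming dead crawls, log analysis, and the web-monitoring-db import are
left out entirely."""
import glob
import os
import subprocess
from datetime import datetime, timezone

COLLECTION = datetime.now(timezone.utc).strftime('crawl-%Y%m%d')
SEED_FILE = os.path.abspath('seeds/seed-list.txt')   # hypothetical seed list
CRAWL_DIR = os.path.abspath('crawls')
S3_BUCKET = 's3://example-edgi-crawls'               # hypothetical bucket
IA_ITEM = f'example-edgi-{COLLECTION}'               # hypothetical IA item

def run(*command):
    print('>', ' '.join(command))
    subprocess.run(command, check=True)

# 1. Crawl the current seed list with Browsertrix-Crawler (run via Docker).
run('docker', 'run', '--rm',
    '-v', f'{SEED_FILE}:/seeds.txt',
    '-v', f'{CRAWL_DIR}:/crawls',
    'webrecorder/browsertrix-crawler', 'crawl',
    '--seedFile', '/seeds.txt',
    '--collection', COLLECTION,
    '--generateWACZ')

# 2. Upload the whole crawl output to S3.
output = os.path.join(CRAWL_DIR, 'collections', COLLECTION)
run('aws', 's3', 'cp', '--recursive', output, f'{S3_BUCKET}/{COLLECTION}/')

# 3. Upload the WACZ file(s) to the Internet Archive with the `ia` CLI.
run('ia', 'upload', IA_ITEM, *glob.glob(os.path.join(output, '*.wacz')))
```

Most of the real work here is less the script than running it on a schedule (cron, a GitHub Actions workflow, or a CronJob on the existing cluster) and getting alerted when a step fails.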
- Public Tracker.
The publishing process for EDGI’s current public web tracker has a bunch of problems and needs more automation, and its general presentation (a Google Sheet in an iframe, with a separate “about” page) is not great either. We want to solve both problems with a simple static site that gets generated automatically from a simplified Google Sheet.
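For illustration only, a bare-bones generator could look something like this; the sheet ID, column names, and output path are all invented, and a real version would use a proper template and publish from CI (e.g. GitHub Pages):

```python
"""Sketch of the "static site from a simplified Google Sheet" idea. The sheet
ID, column names, and output path are all made up."""
import csv
import io
import os
import urllib.request
from html import escape

SHEET_ID = 'EXAMPLE_SHEET_ID'   # hypothetical; sheet must be publicly readable
CSV_URL = f'https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv'

with urllib.request.urlopen(CSV_URL) as response:
    rows = list(csv.DictReader(io.TextIOWrapper(response, encoding='utf-8')))

# Render each tracked change as a plain HTML list item.
items = '\n'.join(
    f'<li><a href="{escape(row["URL"])}">{escape(row["Page"])}</a>: '
    f'{escape(row["Summary"])}</li>'
    for row in rows
)
page = f'<!doctype html>\n<title>EDGI Web Tracker</title>\n<ul>\n{items}\n</ul>\n'

os.makedirs('public', exist_ok=True)
with open('public/index.html', 'w', encoding='utf-8') as f:
    f.write(page)
```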
- Bring web-monitoring-ui up-to-date and resolve security vulnerabilities. (Update Core Frameworks and Tools web-monitoring-ui#1082)
This project is pretty messy, and I have a lot of complicated thoughts about it. I wrote last month that it would be nice to rewrite this as a normal web front-end to the DB (maybe not even with React!) or at least to deploy it with the API server instead of on its own. I’m still not sure there’s time or bandwidth for that, but it has become apparent that this project is so far behind on updates that it’s hard to make safe fixes and changes to it, and that’s a serious problem. Whatever its long-term future is, it at least needs some cleanup so it’s easier to tweak.
- 🔄 Bring web-monitoring-diff back up to date and off life support.
- Wrap up the small refactor I started two years ago before everything shut down. It’s kind of in the way of other work. (Light cleanup on html_render_diff.py web-monitoring-diff#145)
- Replace cChardet, which is blocking us from compatibility with current versions of Python. (Replace cChardet with something compatible with current Python versions web-monitoring-diff#165)
- And/or support installations without cChardet; the optional-import approach is sketched below. (Make cChardet and html5-parser (and lxml?) optional web-monitoring-diff#196)
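The optional-import side of this is pretty mechanical. A minimal sketch, with charset_normalizer standing in as one possible pure-Python fallback (not a decision):

```python
"""Sketch of the optional-dependency approach for web-monitoring-diff#196.
charset_normalizer is just one candidate fallback here, not a commitment."""
try:
    import cchardet as _chardet
except ImportError:
    import charset_normalizer as _chardet  # pure-Python fallback

def sniff_encoding(raw: bytes) -> str:
    # Both libraries expose a chardet-style detect() that returns a dict with
    # an 'encoding' key (which can be None); fall back to UTF-8.
    return _chardet.detect(raw).get('encoding') or 'utf-8'
```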
Non-Critical Goals
- Regulation research tooling
- 🔄 Clean up docs a bit; try and make them more current.
- 🔄 Cost analysis of deployment.
I listed this as critical in Q1, but it turns out we have more than enough money to cover current infra costs, and I should not be overly worried about this (in fact, I should probably be spending a little more for better services, e.g. ElastiCache). It would still be good to do a write-up and overall analysis of what we’re spending, though. Optimizing costs is a non-goal here unless there are some glaringly obvious and low-cost things.
- 🔄 Alert analysts of possible new pages (and maybe removed pages?). This has always been a significant need that’s never been solved well. Some ideas we could do independently or together (the sitemap idea is sketched after this list):
- Keep track of seen URLs from all the links in all the pages we monitor. (Track seen URLs at all the domains we monitor #173)
- Keep track of `sitemap.xml` for sites that have them (e.g. epa.gov). It would be neat to turn these into a git repo so they are also browsable through time, but that’s not the core need.
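To make the sitemap idea concrete, a minimal sketch. The sitemap URL and state file are placeholders, and a real version would loop over every monitored domain and deal with sitemap index files (big sites usually publish an index of sub-sitemaps rather than one flat file):

```python
"""Sketch of the sitemap-based "possible new pages" check. The sitemap URL
and the state file are placeholders; sitemap index handling is omitted."""
import json
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = 'https://www.epa.gov/sitemap.xml'   # example domain from above
SEEN_FILE = 'seen-urls.json'                      # hypothetical state file
NS = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}

with urllib.request.urlopen(SITEMAP_URL) as response:
    tree = ET.parse(response)
current = {element.text.strip() for element in tree.iterfind('.//sm:loc', NS)}

try:
    with open(SEEN_FILE) as f:
        previous = set(json.load(f))
except FileNotFoundError:
    previous = set()

new_urls = sorted(current - previous)
gone_urls = sorted(previous - current)
print(f'{len(new_urls)} possible new pages, {len(gone_urls)} possibly removed')

# Save the current snapshot so the next run diffs against it.
with open(SEEN_FILE, 'w') as f:
    json.dump(sorted(current), f, indent=2)
```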
- 🔄 Automate checking for and recording network errors.
In Q1, we added the ability to record these in the DB, but there is no automated system that creates those records. That said, not having this automated hasn’t been a huge thorn in our sides (yet). It’s not totally clear how to do this in a way that feels reliable (was there a network failure because the IP we crawled from was blocked, or was the site really down?).
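For discussion’s sake, a minimal check might look like the sketch below. The error classification is an assumption, and it deliberately stops short of creating records in web-monitoring-db, since the re-check/verification policy is the actual open question:

```python
"""Sketch of an automated network-error check. Classification here is a guess
at the categories we'd care about, not an agreed scheme."""
import requests

def check_url(url: str, timeout: float = 30) -> dict | None:
    """Return a dict describing the failure, or None if the page responded."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.exceptions.ConnectionError as error:
        return {'url': url, 'error': 'connection', 'detail': str(error)}
    except requests.exceptions.Timeout:
        return {'url': url, 'error': 'timeout'}
    if response.status_code >= 500:
        return {'url': url, 'error': 'http', 'status': response.status_code}
    return None

failure = check_url('https://www.epa.gov/')
if failure:
    # To reduce false positives (e.g. our crawler's IP being blocked), a real
    # version would probably re-check later and/or from a second network
    # before creating a record in the DB.
    print('Would record:', failure)
```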
Other Ideas and Nice-to-Haves
Not top priority, but these things are on the brain.
- Minimize web-monitoring-ui or even merge it into web-monitoring-db.
The way these are split up makes a lot of things very hard to do or work on, makes infrastructure harder (more things to deploy, more CPU + memory requirements), and invites all kinds of weird little problems (CORS, cross-origin caching, login complexity, etc.). We originally designed things as microservices because of the way the team was structured and the skills people had, but that turned out to be over-ambitious in practice (in my opinion). Today, it’s an even more active problem when just one person is maintaining them all.
We also had all kinds of ambitious ideas about the UI project giving analysts a direct interface to their task lists, letting them post their annotations/analysis directly to the DB, and so on. That never got done, and it would require a lot more work (both in the UI and in ways for the analysts to get their data back out or query it) before it would ever be better than the analysts working directly with spreadsheets as they do today. As things stand today, this stuff would be neat, but I don’t think it is ever going to get done.
If we drop all these ideas, the UI really doesn’t need to be nearly as complex or special as it is. It also doesn’t need its own server. At the simplest, it could be served as a static site (via GH pages, from CloudFront/S3, or even just from the API server as an asset). It could also just be normal front-end code in the web-monitoring-db server, but that requires a lot more rewriting and rethinking (it does pave a nicer, more monolithic path back towards including annotations/analysis forms for analysts, though).
This would be some nice cleanup, but could turn into a big project. So a bit questionable.
- Consider whether web-monitoring-db should be rewritten in Python, and be more monolithic. The above stuff about merging away web-monitoring-ui feeds directly into this. Web-monitoring-db is really the odd duck here, written in Ruby and Rails while everything else is Python (or JS if it’s front-end). This was originally done because the first stuff I helped out with at EDGI was Ruby-based, and I thought there was a crew of Ruby folks who would be contributing. That turned out not to be true. I think Rails is fantastic, but the plethora of languages and frameworks here has historically made contributing to this project very hard. Rewriting it in Python would also make it easier to pull other pieces (e.g. the differ, all the import and processing scripts, all the task sheet stuff) together, and would reduce some code duplication.
I don’t expect this to go anywhere — this project is probably much too big and unrealistic at this point. But I want to log it.
- Get rid of Kubernetes. It’s been clear to me for several years now that managing your own Kubernetes cluster is not worthwhile for a project of this size. (I’m not sure it’s worthwhile for any org that cannot afford a dedicated (dev)ops/SRE person to own it.) Managed Kubernetes (AWS EKS, Google GKE, etc.) is better, but also still tends to be more complicated and obtuse than an infrastructure provider’s own stuff (e.g. AWS ECS+Fargate).
This is also a big project on its own that probably won’t happen. Additionally, it’s possible it could be more expensive than the current situation (we have our services very efficiently and tightly packed into 3 EC2 instances, and you can’t make decisions that are quite as granular on ECS, for example), although there are other management tradeoffs.
Note that a simplified, more monolithic structure as discussed above also makes it easier to run this project on other systems/services/infrastructure types. BUT we are probably somewhat coupled to AWS at this point, where all our data is.