How it works
The program connects to SQS and opens the queue. As with writing, opening the queue for reading is done with sqs.create_queue, which simply returns the existing queue's URL if a queue with that name already exists.
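A minimal sketch of this step, assuming a boto3-style SQS client; the client is passed in as a parameter, and the queue name `PlanetMoreInfo` is only a placeholder:

```python
def open_queue(sqs, name="PlanetMoreInfo"):
    """Open the queue for reading.

    create_queue is effectively idempotent: if a queue with this name
    already exists (with the same attributes), SQS returns its URL
    instead of raising an error.
    """
    return sqs.create_queue(QueueName=name)["QueueUrl"]
```

With boto3, the client would be created with `boto3.client("sqs")`.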
Then, it enters a loop calling sqs.receive_message, specifying the URL of the queue, the number of messages to receive on each read, and the maximum time in seconds to wait if no messages are available.
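The body of that loop can be sketched as follows, again with the client passed in (the parameter names match boto3's `receive_message` call):

```python
def receive_messages(sqs, queue_url, max_messages=1, wait_seconds=1):
    """Long-poll the queue for up to wait_seconds.

    Returns a (possibly empty) list of messages: SQS omits the
    "Messages" key from the response entirely when no message
    arrived before the wait time expired.
    """
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=wait_seconds,
    )
    return response.get("Messages", [])
```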
If a message is received, the URL it carries is extracted, and scraping techniques are used to read the page at that URL and pull out the planet's name and information about its albedo.
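The extraction step might look like the sketch below. The page structure (name in an `<h1>`, albedo in a labeled table cell) is an assumption for illustration, and regular expressions are used only to keep the sketch dependency-free; a real scraper would use a proper HTML parser such as BeautifulSoup:

```python
import re

def extract_planet_info(html):
    """Pull the planet name and albedo value out of a page.

    Illustrative only: assumes the name appears in the first <h1>
    and the albedo as a number following the word "albedo".
    """
    name = re.search(r"<h1[^>]*>([^<]+)</h1>", html)
    albedo = re.search(r"[Aa]lbedo[^0-9]*([0-9.]+)", html)
    return {
        "name": name.group(1).strip() if name else None,
        "albedo": float(albedo.group(1)) if albedo else None,
    }
```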
Note that we retrieve the message's receipt handle, which is needed to delete the message from the queue. If we do not delete the message, SQS makes it available again after a period of time (the queue's visibility timeout). So if our scraper crashes without performing this acknowledgement, the message will be made available again by SQS for another scraper to process (or the same one when it is back up).
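The acknowledgement step can be sketched like this, using boto3's `delete_message` call (the helper name is ours):

```python
def acknowledge(sqs, queue_url, message):
    """Delete a processed message so SQS will not redeliver it.

    Without this call, the message becomes visible again once the
    queue's visibility timeout expires, and another worker (or this
    one, after a restart) will receive it.
    """
    sqs.delete_message(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
    )
```

Calling `acknowledge` only after the page has been fully scraped gives at-least-once processing: a crash mid-scrape means the message is simply retried later.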