
Commit 0295eb5

update with DSv0.1 improvements

1 parent ad4dcc3 commit 0295eb5
File tree

8 files changed: +568 −110 lines

config.py

Lines changed: 8 additions & 3 deletions

```diff
@@ -1,6 +1,7 @@
 # Constants (User configurable)
 
 APP_NAME = 'DistributedCP' # Used to generate derivative names unique to the application.
+LOG_GROUP_NAME = APP_NAME
 
 # DOCKER REGISTRY INFORMATION:
 DOCKERHUB_TAG = 'cellprofiler/distributed-cellprofiler:2.0.0_4.1.3'
@@ -32,10 +33,14 @@
 # SQS QUEUE INFORMATION:
 SQS_QUEUE_NAME = APP_NAME + 'Queue'
 SQS_MESSAGE_VISIBILITY = 1*60 # Timeout (secs) for messages in flight (average time to be processed)
-SQS_DEAD_LETTER_QUEUE = 'arn:aws:sqs:some-region:111111100000:DeadMessages'
+SQS_DEAD_LETTER_QUEUE = 'user_DeadMessages'
 
-# LOG GROUP INFORMATION:
-LOG_GROUP_NAME = APP_NAME
+# MONITORING
+AUTO_MONITOR = 'True'
+
+# CLOUDWATCH DASHBOARD CREATION
+CREATE_DASHBOARD = 'True' # Create a dashboard in Cloudwatch for run
+CLEAN_DASHBOARD = 'True' # Automatically remove dashboard at end of run with Monitor
 
 # REDUNDANCY CHECKS
 CHECK_IF_DONE_BOOL = 'False' #True or False- should it check if there are a certain number of non-empty files and delete the job if yes?
```
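
Note that `SQS_DEAD_LETTER_QUEUE` changes here from a full ARN to a bare queue name, matching the step_1_configuration.md change below ("This queue will be automatically made if it doesn't exist already"). A minimal boto3 sketch of how a name-based dead-letter queue can be created on demand and attached to the main queue — the helper name and the `maxReceiveCount` of 10 are assumptions for illustration, not DCP's actual code:

```python
# Sketch (assumed, not DCP's code): resolve a bare queue name like
# 'user_DeadMessages' to an ARN, creating the queue if needed, then
# wire it to the main queue as a dead-letter target.
import json
import boto3

sqs = boto3.client('sqs')

def ensure_dead_letter_queue(name):  # hypothetical helper
    url = sqs.create_queue(QueueName=name)['QueueUrl']  # no-op if it already exists with default attributes
    attrs = sqs.get_queue_attributes(QueueUrl=url, AttributeNames=['QueueArn'])
    return attrs['Attributes']['QueueArn']

dlq_arn = ensure_dead_letter_queue('user_DeadMessages')  # SQS_DEAD_LETTER_QUEUE

# Attach a redrive policy so jobs that repeatedly fail land in the DLQ;
# the maxReceiveCount of 10 is an illustrative choice.
sqs.create_queue(
    QueueName='DistributedCPQueue',  # APP_NAME + 'Queue'
    Attributes={'RedrivePolicy': json.dumps(
        {'deadLetterTargetArn': dlq_arn, 'maxReceiveCount': '10'})},
)
```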

documentation/DCP-documentation/overview_2.md

Lines changed: 12 additions & 2 deletions

```diff
@@ -37,7 +37,7 @@ Any time they don't have a job they go back to SQS.
 If SQS tells them there are no visible jobs then they shut themselves down.
 * When an instance finishes a job it sends a message to SQS and removes that job from the queue.
 
-## What does this look like?
+## What does an instance configuration look like?
 
 ![Example Instance Configuration](images/sample_DCP_config_1.png)
 
@@ -65,4 +65,14 @@ How long a job takes to run and how quickly you need the data may also affect ho
 * Running a few large Docker containers (as opposed to many small ones) increases the amount of memory all the copies of your software are sharing, decreasing the likelihood you'll run out of memory if you stagger your job start times.
 However, you're also at a greater risk of running out of hard disk space.
 
-Keep an eye on all of the logs the first few times you run any workflow and you'll get a sense of whether your resources are being utilized well or if you need to do more tweaking.
+Keep an eye on all of the logs the first few times you run any workflow and you'll get a sense of whether your resources are being utilized well or if you need to do more tweaking.
+
+## What does this look like on AWS?
+The following five are the primary resources that Distributed-CellProfiler interacts with.
+After you have finished [preparing for Distributed-CellProfiler](step_0_prep), you do not need to directly interact with any of these services outside of Distributed-CellProfiler.
+If you would like a granular view of what Distributed-CellProfiler is doing while it runs, you can open each console in a separate tab in your browser and watch their individual behaviors, though this is not necessary, especially if you run the [monitor command](step_4_monitor.md) and/or have DS automatically create a Dashboard for you (see [Configuration](step_1_configuration.md)).
+* [S3 Console](https://console.aws.amazon.com/s3)
+* [EC2 Console](https://console.aws.amazon.com/ec2/)
+* [ECS Console](https://console.aws.amazon.com/ecs/)
+* [SQS Console](https://console.aws.amazon.com/sqs/)
+* [CloudWatch Console](https://console.aws.amazon.com/cloudwatch/)
```
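
The surrounding text in overview_2.md describes the worker protocol: instances poll SQS for jobs, shut down when no jobs are visible, and delete a job from the queue when it finishes. An illustrative sketch of that loop in boto3 — `run_job` is a hypothetical placeholder, and this is not DCP's actual worker code:

```python
# Illustrative worker loop for the SQS protocol described above.
import boto3

sqs = boto3.client('sqs')
queue_url = sqs.get_queue_url(QueueName='DistributedCPQueue')['QueueUrl']

def run_job(body):
    """Hypothetical stand-in for running CellProfiler on one job spec."""
    print('processing', body)

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    messages = resp.get('Messages', [])
    if not messages:
        break  # SQS reports no visible jobs: the instance shuts itself down
    msg = messages[0]
    try:
        run_job(msg['Body'])
    except Exception:
        # Leave the message alone; it becomes visible again after
        # SQS_MESSAGE_VISIBILITY expires and another worker retries it.
        continue
    # Success: remove the job from the queue.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])
```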
documentation/DCP-documentation/step_0_prep.md

Lines changed: 57 additions & 67 deletions

```diff
@@ -1,110 +1,100 @@
 # Step 0: Prep
+There are two classes of AWS resources that Distributed-CellProfiler interacts with: 1) infrastructure that is made once per AWS account to enable any Distributed-CellProfiler implementation to run and 2) infrastructure that is made and destroyed with every run.
+This section describes the creation of the first class of AWS infrastructure and only needs to be followed once per account.
 
-Distributed-CellProfiler runs many parallel jobs in EC2 instances that are automatically managed by ECS.
-To get jobs started, a control node to submit jobs and monitor progress is needed.
-This section describes what you need in AWS and in the control node to get started.
-This guide only needs to be followed once per account.
-(Though we recommend each user has their own control node, further control nodes can be created from an AMI after this guide has been followed to completion once.)
-
-
-## 1. AWS Configuration
+## AWS Configuration
+The AWS resources involved in running Distributed-CellProfiler are configured using the [AWS Web Console](https://aws.amazon.com/console/) and a setup script we provide ([setup_AWS.py](../../setup_AWS.py)).
+You need an active AWS account configured to proceed.
+Login into your AWS account, and make sure the following list of resources is created:
 
-The AWS resources involved in running Distributed-CellProfiler can be primarily configured using the [AWS Web Console](https://aws.amazon.com/console/).
-The architecture of Distributed-CellProfiler is based in the [worker pattern](https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/) for distributed systems.
-We have adapted and simplified that architecture for Distributed-CellProfiler.
-
-You need an active account configured to proceed. Login into your AWS account, and make sure the following list of resources is created:
-
-### 1.1 Access keys
-* Get [security credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your account.
+### 1.1 Manually created resources
+* **Security Credentials**: Get [security credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your account.
 Store your credentials in a safe place that you can access later.
-* You will probably need an ssh key to login into your EC2 instances (control or worker nodes).
+* **SSH Key**: You will probably need an ssh key to login into your EC2 instances (control or worker nodes).
 [Generate an SSH key](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) and store it in a safe place for later use.
 If you'd rather, you can generate a new key pair to use for this during creation of the control node; make sure to `chmod 600` the private key when you download it.
-
-### 1.2 Roles and permissions
-* You can use your default VPC, subnet, and security groups; you should add an inbound SSH connection from your IP address to your security group.
-* [Create an ecsInstanceRole](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) with appropriate permissions (An S3 bucket access policy CloudWatchFullAccess, CloudWatchActionEC2Access, AmazonEC2ContainerServiceforEC2Role policies, ec2.amazonaws.com as a Trusted Entity)
-* [Create an aws-ec2-spot-fleet-tagging-role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) with appropriate permissions (just needs AmazonEC2SpotFleetTaggingRole); ensure that in the "Trust Relationships" tab it says "spotfleet.amazonaws.com" rather than "ec2.amazonaws.com" (edit this if necessary).
-In the current interface, it's easiest to click "Create role", select "EC2" from the main service list, then select "EC2- Spot Fleet Tagging".
+* **SSH Connection**: You can use your default AWS account VPC, subnet, and security groups.
+You should add an inbound SSH connection from your IP address to your security group.
+
+### 1.2 Automatically created resources
+* Run setup_AWS by entering `python setup_AWS.py` from your command line.
+It will automatically create:
+  * an [ecsInstanceRole](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) with appropriate permissions.
+  This role is used by the EC2 instances generated by your spot fleet request and coordinated by ECS.
+  * an [aws-ec2-spot-fleet-tagging-role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) with appropriate permissions.
+  This role grants the Spot Fleet the permissions to request, launch, terminate, and tag instances.
+  * an SNS topic that is used for triggering the auto-Monitor.
+  * a Monitor lambda function that is used for auto-monitoring of your runs (see [Step 4: Monitor](step_4_monitor.md) for more information).
 
 ### 1.3 Auxiliary Resources
+*You can certainly configure Distributed-CellProfiler for use without S3, but most DS implementations use S3 for storage.*
 * [Create an S3 bucket](http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html) and upload your data to it.
-* Add permissions to your bucket so that [logs can be exported to it](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) (Step 3, first code block)
-* [Create an SQS](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/CreatingQueue.html) queue for unprocessable-messages to be dumped into (aka a DeadLetterQueue).
-
-### 1.4 Primary Resources
-The following five are the primary resources that Distributed-CellProfiler interacts with.
-After you have finished preparing for Distributed-CellProfiler (this guide), you do not need to directly interact with any of these services outside of Distributed-CellProfiler.
-If you would like a granular view of [what Distributed-CellProfiler is doing while it runs](overview_2.md), you can open each console in a separate tab in your browser and watch their individual behaviors, though this is not necessary, especially if you run the [monitor command](step_4_monitor.md) and/or enable auto-Dashboard creation in your [configuration](step_1_configuration.md).
-* [S3 Console](https://console.aws.amazon.com/s3)
-* [EC2 Console](https://console.aws.amazon.com/ec2/)
-* [ECS Console](https://console.aws.amazon.com/ecs/)
-* [SQS Console](https://console.aws.amazon.com/sqs/)
-* [CloudWatch Console](https://console.aws.amazon.com/cloudwatch/)
-
-### 1.5 Spot Limits
+Add permissions to your bucket so that [logs can be exported to it](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) (Step 3, first code block).
+
+### 1.4 Increase Spot Limits
 AWS initially [limits the number of spot instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-limits.html) you can use at one time; you can request more through a process in the linked documentation.
 Depending on your workflow (your scale and how you group your jobs), this may not be necessary.
 
-## 2. The Control Node
-The control node can be your local machine if it is configured properly, or it can also be a small instance in AWS.
+## The Control Node
+The control node is a machine that is used for running the Distributed-CellProfiler scripts.
+It can be your local machine, if it is configured properly, or it can also be a small instance in AWS.
 We prefer to have a small EC2 instance dedicated to controlling our Distributed-CellProfiler workflows for simplicity of access and configuration.
-To login in an EC2 machine you need an ssh key that can be generated in the web console.
+To login in an EC2 machine you need an SSH key that can be generated in the web console.
 Each time you launch an EC2 instance you have to confirm having this key (which is a .pem file).
 This machine is needed only for submitting jobs, and does not have any special computational requirements, so you can use a micro instance to run basic scripts to proceed.
+(Though we recommend each user has their own control node, further control nodes can be created from an AMI after this guide has been followed to completion once.)
 
 The control node needs the following tools to successfully run Distributed-CellProfiler.
-Here we assume you are using the command line in a Linux machine, but you are free to try other operating systems too.
+These instructions assume you are using the command line in a Linux machine, but you are free to try other operating systems too.
 
-### 2.1 Make your own control node
+### Create Control Node from Scratch
+#### 2.1 Install Python 3.8 or higher and pip
+Most scripts are written in Python and support Python 3.8 and 3.9.
+Follow installation instructions for your platform to install Python.
+pip should be included with the installation of Python 3.8 or 3.9, but if you do not have it installed, install pip.
 
-#### 2.1.1 Clone this repo
+#### 2.2 Clone this repository and install requirements
 You will need the scripts in Distributed-CellProfiler locally available in your control node.
 <pre>
 sudo apt-get install git
 git clone https://github.com/DistributedScience/Distributed-CellProfiler.git
 cd Distributed-CellProfiler/
 git pull
-</pre>
-
-#### 2.1.2 Python 3.8 or higher and pip
-Most scripts are written in Python and support Python 3.8 and 3.9.
-Follow installation instructions for your platform to install python and, if needed, pip.
-After Python has been installed, you need to install the requirements for Distributed-CellProfiler following this steps:
-
-<pre>
-cd Distributed-CellProfiler/files
+# install requirements
+cd files
 sudo pip install -r requirements.txt
 </pre>
 
-#### 2.1.3 AWS CLI
+#### 2.3 Install AWS CLI
 The command line interface is the main mode of interaction between the local node and the resources in AWS.
-Follow AWS instructions to install [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
-Then set up AWS CLI with:
+You need to install [awscli](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) for Distributed-CellProfiler to work properly:
+
 <pre>
+sudo pip install awscli --ignore-installed six
+sudo pip install --upgrade awscli
 aws configure
 </pre>
 
-When running the last step, you will need to enter your AWS credentials.
+When running the last step (`aws configure`), you will need to enter your AWS credentials.
 Make sure to set the region correctly (i.e. us-west-1 or eu-east-1, not eu-west-2a), and set the default file type to json.
 
 #### 2.1.4 s3fs-fuse (optional)
 [s3fs-fuse](https://github.com/s3fs-fuse/s3fs-fuse) allows you to mount your s3 bucket as a pseudo-file system.
 It does not have all the performance of a real file system, but allows you to easily access all the files in your s3 bucket.
 Follow the instructions at the link to mount your bucket.
 
-#### 2.1.5 Parallel (optional)
-Parallel is an optional Linux tool that you can install on your control node for generating job files using the `batches.sh` scripting tool.
-If you use other ways of generating job files (e.g. `run_batch_general.py`) you do not need parallel.
-To install parallel, run:
-<pre>
-sudo apt-get install parallel
-</pre>
-
-#### 2.1.6 Create a Control Node AMI (optional)
+### Create Control Node from AMI (optional)
 Once you've set up the other software (and gotten a job running, so you know everything is set up correctly), you can use Amazon's web console to set this up as an Amazon Machine Instance, or AMI, to replicate the current state of the hard drive.
 Create future control nodes using this AMI so that you don't need to repeat the above installation.
 
-### 2.2 Use a pre-made AMI
-You can use our [Cytominer-VM](https://github.com/cytomining/cytominer-vm) and add your own security keys; it has extra things you may not need, such as R, but it can be very handy!
+## Removing long-term infrastructure
+If you decide that you never want to run Distributed-CellProfiler again and would like to remove the long-term infrastructure, follow these steps.
+
+### Remove Roles, Lambda Monitor, and Monitor SNS
+<pre>
+python setup_AWS.py destroy
+</pre>
+
+### Remove EC2 Control node
+If you made your control node as an EC2 instance, while in the AWS console, select the instance.
+Select `Instance state` => `Terminate instance`.
```
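
The new section 1.2 above says `setup_AWS.py` creates two IAM roles, an SNS topic, and a Monitor Lambda function. For orientation, here is a rough boto3 sketch of the role-and-topic portion — the managed-policy ARNs are real AWS policies, but the structure, topic name, and omitted error handling are assumptions, not a copy of the actual script:

```python
# Rough sketch of the kind of calls a setup script like setup_AWS.py makes.
import json
import boto3

iam = boto3.client('iam')
sns = boto3.client('sns')

def trust_policy(service):
    # Lets the named AWS service assume the role.
    return json.dumps({'Version': '2012-10-17', 'Statement': [{
        'Effect': 'Allow', 'Principal': {'Service': service},
        'Action': 'sts:AssumeRole'}]})

# Role for the EC2 instances launched by the spot fleet and coordinated by ECS.
iam.create_role(RoleName='ecsInstanceRole',
                AssumeRolePolicyDocument=trust_policy('ec2.amazonaws.com'))
iam.attach_role_policy(
    RoleName='ecsInstanceRole',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role')

# Role that lets the Spot Fleet request, launch, terminate, and tag instances.
iam.create_role(RoleName='aws-ec2-spot-fleet-tagging-role',
                AssumeRolePolicyDocument=trust_policy('spotfleet.amazonaws.com'))
iam.attach_role_policy(
    RoleName='aws-ec2-spot-fleet-tagging-role',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2SpotFleetTaggingRole')

# SNS topic that triggers the auto-Monitor (topic name is illustrative).
sns.create_topic(Name='DistributedCPMonitorTopic')
```

`python setup_AWS.py destroy` would then tear these same resources back down, which is why the removal section above needs only that one command plus manual termination of the control node.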

documentation/DCP-documentation/step_1_configuration.md

Lines changed: 13 additions & 0 deletions

```diff
@@ -55,6 +55,7 @@ This can safely be set to 0 for workflows that don't require much memory or exec
 * **SQS_MESSAGE_VISIBILITY:** How long each job is hidden from view before being allowed to be tried again.
 We recommend setting this to slightly longer than the average amount of time it takes an individual job to process- if you set it too short, you may waste resources doing the same job multiple times; if you set it too long, your instances may have to wait around a long while to access a job that was sent to an instance that stalled or has since been terminated.
 * **SQS_DEAD_LETTER_QUEUE:** The name of the queue to send jobs to if they fail to process correctly multiple times; this keeps a single bad job (such as one where a single file has been corrupted) from keeping your cluster active indefinitely.
+This queue will be automatically made if it doesn't exist already.
 See [Step 0: Prep](step_0_prep.med) for more information.
 
 ***
@@ -65,6 +66,18 @@ See [Step 0: Prep](step_0_prep.med) for more information.
 
 ***
 
+### MONITORING
+* **AUTO_MONITOR:** Whether or not to have Auto-Monitor automatically monitor your jobs.
+
+***
+
+### CLOUDWATCH DASHBOARD CREATION
+
+* **CREATE_DASHBOARD:** Create a Cloudwatch Dashboard that plots run metrics?
+* **CLEAN_DASHBOARD:** Automatically clean up the Cloudwatch Dashboard at the end of the run?
+
+***
+
 ### REDUNDANCY CHECKS
 
 * **CHECK_IF_DONE_BOOL:** Whether or not to check the output folder before proceeding.
```
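
In CloudWatch terms, the two new dashboard flags map naturally onto the `PutDashboard` and `DeleteDashboards` APIs. A sketch of what that could look like — the widget layout, dashboard name, and region below are assumptions for illustration, not taken from DCP's monitor code:

```python
# Sketch of what CREATE_DASHBOARD / CLEAN_DASHBOARD could correspond to.
import json
import boto3

cw = boto3.client('cloudwatch')
name = 'DistributedCP'  # e.g. derived from APP_NAME

body = {'widgets': [{
    'type': 'metric', 'x': 0, 'y': 0, 'width': 12, 'height': 6,
    'properties': {
        'metrics': [['AWS/SQS', 'ApproximateNumberOfMessagesVisible',
                     'QueueName', 'DistributedCPQueue']],
        'period': 300, 'stat': 'Average', 'region': 'us-east-1',
        'title': 'Jobs remaining in queue'},
}]}

# CREATE_DASHBOARD = 'True': plot run metrics while the run is active.
cw.put_dashboard(DashboardName=name, DashboardBody=json.dumps(body))

# CLEAN_DASHBOARD = 'True': Monitor removes the dashboard when the run ends.
cw.delete_dashboards(DashboardNames=[name])
```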
