Fig. 1

Sketch of the neural network architecture of DeepGRP. The architecture is similar to that of [11] but includes an additional attention layer. The input flows along the black arrows, while the hidden states pass information along the dashed lines. The first input is the DNA sequence, and the second input is its Watson-Crick complement. Both sequences are one-hot encoded.
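
The two inputs described in the caption could be prepared as follows; this is a minimal sketch (function and variable names are illustrative, not from DeepGRP), assuming the standard A/C/G/T alphabet and NumPy arrays of shape (sequence length, 4):

```python
import numpy as np

# Illustrative one-hot encoding of a DNA sequence and its
# Watson-Crick complement (A<->T, C<->G), as in the figure caption.
BASES = "ACGT"
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a DNA string into a (len(seq), 4) float array."""
    idx = [BASES.index(b) for b in seq]
    out = np.zeros((len(seq), 4), dtype=np.float32)
    out[np.arange(len(seq)), idx] = 1.0
    return out

seq = "ACGT"
forward = one_hot(seq)                      # first network input
complement = one_hot(seq.translate(COMPLEMENT))  # second network input
```

Both arrays would then be fed to the network as the two parallel input branches shown in the sketch.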