Here are the docs on input shapes for LSTMs: *Input shapes.* 3D tensor with shape `(batch_size, timesteps, input_dim)`; (optional) 2D tensors with shape `(batch_size, output_dim)`. This implies that you need a constant number of timesteps within each batch. The canonical way of doing this is to pad your sequences, using something like Keras's padding utility (`pad_sequences`).
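As a minimal sketch of what that padding step does, here is a hand-rolled equivalent of Keras's `pad_sequences` using only NumPy (the function name `pad_to_length` and the example sequences are made up for illustration):

```python
import numpy as np

def pad_to_length(sequences, maxlen, value=0):
    """Right-pad (or truncate) each sequence so all share length `maxlen`."""
    batch = np.full((len(sequences), maxlen), value, dtype=np.int64)
    for i, seq in enumerate(sequences):
        trimmed = seq[:maxlen]          # truncate sequences longer than maxlen
        batch[i, :len(trimmed)] = trimmed
    return batch

# Three variable-length sequences (e.g. tokenized sentences).
sequences = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]]
padded = pad_to_length(sequences, maxlen=4)
print(padded.shape)  # (3, 4): batch_size=3, timesteps=4
```

After padding, the batch is a 2D array of token ids with constant timesteps; passing it through an embedding layer then yields the 3D `(batch_size, timesteps, input_dim)` tensor the LSTM expects.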

*LSTM (Long Short-Term Memory) is a type of recurrent neural network used to learn from sequence data in deep learning.*

ConvNet input shape: a CNN, by contrast, always takes a 4D array as input, with shape `(batch_size, height, width, depth)`, where the first dimension is the batch size and the other three are the dimensions of the image: height, width, and depth (channels).
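To make the contrast concrete, here is a small sketch (using NumPy only; the sizes are made-up example values) of building such a 4D batch:

```python
import numpy as np

# Hypothetical batch: 32 RGB images, each 28x28 pixels with 3 channels.
batch_size, height, width, depth = 32, 28, 28, 3
images = np.zeros((batch_size, height, width, depth), dtype=np.float32)

print(images.shape)  # (32, 28, 28, 3) -> the 4D shape a CNN expects
```

A single image of shape `(28, 28, 3)` would need an extra batch axis (e.g. `images[np.newaxis, ...]`) before being fed to the network.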