Among the most studied invariants of linear block codes are the generalized Hamming weights. The interest in these invariants stems from the fact that, as Wei proved, they measure information leakage in the context of wire-tap channels of type II. In 1994, Forney pointed out that these weights could also be extended to convolutional codes. Motivated by this observation, Rosenthal and York gave a first definition of generalized Hamming weights for convolutional codes in 1997. Inspired by their work, in this talk we propose a new class of generalized weights that takes into account the underlying module structure. We derive their basic properties and show that they are a natural extension of the generalized weights of linear block codes. We also briefly discuss the relation with the preexisting definition of Rosenthal and York. In the last part of the talk, we provide an upper bound on the weight hierarchy of MDS codes. This talk is based on joint work with Elisa Gorla.
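For reference, Wei's generalized Hamming weights of a linear block code can be recalled as follows (this is the standard definition, not a formula from this talk): for a linear code $C \subseteq \mathbb{F}_q^n$ of dimension $k$ and $1 \le r \le k$,

$$
d_r(C) \;=\; \min \bigl\{ \, |\mathrm{supp}(D)| \;:\; D \subseteq C \text{ a subcode with } \dim(D) = r \, \bigr\},
$$

where $\mathrm{supp}(D)$ denotes the set of coordinate positions in which some codeword of $D$ is nonzero. In particular, $d_1(C)$ recovers the minimum Hamming distance of $C$, and $d_1(C) < d_2(C) < \cdots < d_k(C)$ is the weight hierarchy of $C$.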