Binary asymmetric channels are channels over the binary alphabet {0,1} in which the probability that a 0 becomes a 1 and the probability that a 1 becomes a 0 may differ. They generalize both the classical binary symmetric channel and the Z-channel, which correspond to extreme, simpler cases. In this talk we describe the mathematical properties of the binary asymmetric channel and the structure of binary codes in connection with it. Our work was motivated by a recently established connection between neuroscience and coding theory: codes for binary asymmetric channels can be seen as discretizations of receptive fields, which model the responses of neurons to stimuli. The focus of this talk is on the structural properties and parameters of binary error-correcting codes for asymmetric channels. We introduce two notions of discrepancy between binary vectors which, although not distance functions in general, match the binary asymmetric channel in a precise information-theoretic sense. These notions define fundamental parameters that, in particular, allow us to measure the "asymmetric" performance of a combinatorial neural code. We then exhibit connections between the discrepancy functions and the more traditional Hamming distance. Despite these connections, techniques from classical coding theory do not easily extend to the discrepancy setting. We argue this point and establish upper and lower bounds for the code parameters.
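The channel model described above admits a compact simulation. The sketch below (a hypothetical helper, not part of the talk) flips each transmitted bit with a direction-dependent probability; the binary symmetric channel is recovered when the two probabilities coincide, and the Z-channel when the 0-to-1 probability is zero.

```python
import random

def bac_transmit(bits, p01, p10, rng=None):
    """Simulate one use per bit of a binary asymmetric channel.

    Each 0 flips to 1 with probability p01; each 1 flips to 0 with
    probability p10. Setting p01 == p10 gives the binary symmetric
    channel; setting p01 == 0 gives the Z-channel (only 1 -> 0 errors).
    This is an illustrative sketch, not code from the talk.
    """
    rng = rng or random.Random(0)
    out = []
    for b in bits:
        flip_prob = p01 if b == 0 else p10
        out.append(1 - b if rng.random() < flip_prob else b)
    return out

# Z-channel behavior: with p01 = 0, a transmitted 0 is never corrupted.
received = bac_transmit([0, 1, 0, 1, 1], p01=0.0, p10=0.3)
```

Note that because the two crossover probabilities enter independently, the decoding problem is not symmetric in the codewords, which is precisely why distance-like functions tailored to the channel (rather than the Hamming distance alone) become relevant.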