# Older blog entries for sness (starting at number 5017)

Berlekamp–Massey algorithm - Wikipedia, the free encyclopedia

Berlekamp–Massey algorithm - Wikipedia, the free encyclopedia: "The Berlekamp–Massey algorithm is an algorithm that will find the shortest linear feedback shift register (LFSR) for a given binary output sequence. The algorithm will also find the minimal polynomial of a linearly recurrent sequence in an arbitrary field.[1]"
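
A minimal sketch of the binary (GF(2)) case described above — finding the shortest LFSR for a bit sequence. Variable names follow the algorithm's conventional presentation (c = connection polynomial, L = current register length), not anything from the article:

```python
def berlekamp_massey(s):
    """Return (L, c) where L is the shortest LFSR length for bit list s
    and c[0..L] are the GF(2) connection polynomial coefficients."""
    n = len(s)
    c = [0] * n  # current connection polynomial
    b = [0] * n  # connection polynomial before the last length change
    c[0] = b[0] = 1
    L, m = 0, -1  # m = index of the last length change
    for i in range(n):
        # discrepancy: does the current LFSR predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            p = i - m
            for j in range(n - p):
                c[j + p] ^= b[j]  # c(x) += x^p * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]
```

For example, a sequence satisfying s[i] = s[i-1] XOR s[i-2] yields L = 2 with connection polynomial 1 + x + x².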

Syndicated 2013-01-27 18:13:00 from sness

Low-density parity-check code - Wikipedia, the free encyclopedia

Low-density parity-check code - Wikipedia, the free encyclopedia: "In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel,[1][2] and is constructed using a sparse bipartite graph.[3] LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close (or even arbitrarily close on the BEC) to the theoretical maximum (the Shannon limit) for a symmetric memory-less channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear to their block length."
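
The sparse-parity-check idea can be sketched with a toy example. The 4×6 matrix below is a hand-made assumption for illustration, and the decoder is the simple bit-flipping rule rather than belief propagation — real LDPC codes use much larger random sparse graphs:

```python
# Toy parity-check matrix: every bit participates in 2 checks,
# every check covers 3 bits (a tiny "regular" code, made up here).
H = [
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def syndrome(H, y):
    # which parity checks fail? (all-zero syndrome means y is a codeword)
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

def bit_flip_decode(H, y, max_iters=10):
    # repeatedly flip the bit involved in the most failed checks
    y = list(y)
    for _ in range(max_iters):
        s = syndrome(H, y)
        if not any(s):
            return y
        votes = [sum(H[i][j] * s[i] for i in range(len(H)))
                 for j in range(len(y))]
        y[votes.index(max(votes))] ^= 1
    return y
```

With this H, any single bit error triggers exactly two failed checks that intersect only in the erroneous bit, so one flip corrects it.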

Syndicated 2013-01-27 18:12:00 from sness

Reed–Solomon error correction - Wikipedia, the free encyclopedia

Reed–Solomon error correction - Wikipedia, the free encyclopedia: "In coding theory, Reed–Solomon (RS) codes are non-binary[1] cyclic error-correcting codes invented by Irving S. Reed and Gustave Solomon. They described a systematic way of building codes that could detect and correct multiple random symbol errors. By adding t check symbols to the data, an RS code can detect any combination of up to t erroneous symbols, or correct up to ⌊t/2⌋ symbols. As an erasure code, it can correct up to t known erasures, or it can detect and correct combinations of errors and erasures. Furthermore, RS codes are suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b.[2] The choice of t is up to the designer of the code, and may be selected within wide limits."
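
One way to see the erasure-correction claim concretely is to view an RS code as polynomial evaluation: the k message symbols are the coefficients of a polynomial, the n codeword symbols are its values at n points, and any k surviving symbols determine the polynomial. The sketch below works over the prime field GF(257) for readability — an assumption for illustration; practical RS codes work over GF(2⁸):

```python
P = 257  # a prime modulus, so every nonzero element is invertible

def rs_encode(msg, n):
    # message symbols = coefficients of a degree < k polynomial,
    # codeword = its evaluations at x = 0 .. n-1
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

def poly_mul_linear(num, a):
    # multiply a polynomial (low-degree-first coefficients) by (x - a) mod P
    out = [0] * (len(num) + 1)
    for t, c in enumerate(num):
        out[t + 1] = (out[t + 1] + c) % P
        out[t] = (out[t] - a * c) % P
    return out

def rs_recover(points, k):
    # erasure decoding: Lagrange-interpolate any k surviving (x, y) pairs
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for i in range(k):
        num, denom = [1], 1
        for j in range(k):
            if j != i:
                num = poly_mul_linear(num, xs[j])
                denom = denom * (xs[i] - xs[j]) % P
        scale = ys[i] * pow(denom, P - 2, P) % P  # Fermat inverse of denom
        for t, c in enumerate(num):
            coeffs[t] = (coeffs[t] + scale * c) % P
    return coeffs
```

Encoding a 3-symbol message into 6 symbols and dropping any 3 of them still leaves enough to recover the message — the t = 3 known erasures the excerpt mentions.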

Syndicated 2013-01-27 18:10:00 from sness

Silver fox (animal) - Wikipedia, the free encyclopedia

Silver fox (animal) - Wikipedia, the free encyclopedia: "The Silver Fox (Vulpes vulpes) is a melanistic form of red fox."

Syndicated 2013-01-27 17:53:00 from sness

What We Learned in 2012 · An A List Apart Article

What We Learned in 2012 · An A List Apart Article: "Design systems, not screens

More than half of U.S. laptop owners now also own a smartphone, and nearly a quarter of them own a tablet too (source). And, of course, with the holiday season past us, the number of users who own a device in all three categories will jump higher still. Users move between devices so fluidly, and in patterns that we often can’t predict. Now apps are starting to connect to other devices to control, synchronize or extend an experience.

I think we’re going to see more cross-channel design thinking in 2013 to address simultaneous multi-device usage, and frequent device hopping in a single workflow. Continuity between platforms will be important, but we don’t need to make the experience the same between devices. The user experience will morph with each context. We’ll need to design systems, not screens, to solve cross-channel experience design problems."

Syndicated 2013-01-27 17:47:00 from sness

What We Learned in 2012 · An A List Apart Article

What We Learned in 2012 · An A List Apart Article: "For me 2012 was a year of experimentation. I learned that the more certain you are about something, or the longer you’ve been doing things one way, the more important it is to abandon your assumptions and try the complete opposite. The more embedded your assumptions are, the less you notice them—so this is not easy."

Syndicated 2013-01-27 17:46:00 from sness

Optimal substructure - Wikipedia, the free encyclopedia

Optimal substructure - Wikipedia, the free encyclopedia: "In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine the usefulness of dynamic programming and greedy algorithms for a problem.[1]"
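
A tiny illustration of the property: minimum-coin change has optimal substructure, so a memoized recursion finds the optimum where a greedy choice can fail. The coin set here is a made-up example chosen to show that:

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 4)):
    """Fewest coins summing to amount; assumes a coin of value 1 exists
    so every amount is reachable."""
    @lru_cache(maxsize=None)
    def opt(n):
        if n == 0:
            return 0
        # an optimal solution for n extends an optimal solution
        # for the smaller subproblem n - c by one coin
        return 1 + min(opt(n - c) for c in coins if c <= n)
    return opt(amount)
```

For amount 6 the optimum is two coins (3 + 3), while greedily taking the largest coin first gives three (4 + 1 + 1).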

Syndicated 2013-01-27 08:48:00 from sness

Dynamic programming - Wikipedia, the free encyclopedia

Dynamic programming - Wikipedia, the free encyclopedia: "Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then the path p1 from u to w and p2 from w to v are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman-Ford algorithm or the Floyd-Warshall algorithm does."

Syndicated 2013-01-27 08:48:00 from sness

5008 older entries...