Older blog entries for async (starting at number 82)

Jfleck:

cont'd from our conversation: some more references on wavelets

http://engineering.rowan.edu/~polikar/WAVELETS/WTtutorial.html
http://amath.colorado.edu/faculty/naveau/PUBLI/solar_Oh.pdf

the first is a tutorial, and the second is a paper on data analysis of sunspots, climate, and various other things using wavelets.

also (tangentially), there seems to be some debate as to whether the experiment concerning the speed of gravity actually measured the right thing (july 20 astrophysical journal letters).

here is a link with a summary from the kids who do the journal science. (free reg or science subscription for access).

i have begun delimiting blocks in my python program with

# end <whatever began the block>

this is doubly for my benefit, making it a bit easier for me to find the next block and to fix things when i accidentally unindent something and then forget at what level it goes. i should make my editor understand this convention as well, someday.
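
e.g., a made-up little snippet just to show the convention (the names and data here are invented):

    def count_ready(items):
        total = 0
        for item in items:
            if item.get('ready'):
                total += 1
            # end if
        # end for
        return total
    # end def count_ready

    print(count_ready([{'ready': True}, {'ready': False}]))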

Look upon my works, ye Mighty, and despair!

daniels :

Birthdays

don't worry; when you are 23, you can legally kill a person. it's great.

errorists

overrun, yes, but they are so cute and cuddly, you should hardly care. just sitting all day in the eucalyptus trees and then blowing up stuff every now and again.

Chicago:

here's a schematic of a small, complete 8051 system which will serve to demonstrate some things.

the chips (iirc) are

  • 87C52 - intel 8052 uC w/ internal eprom
  • 39F512 - flash rom
  • 62256 - static ram
  • 82C55 - parallel port io expander
  • 74HC373 - octal transparent latch (Q follows D while LE is high, and holds the last value when LE goes low)
  • 74HC138 - 3:8 decoder (ABC select a number between 0-7, which causes the associated Yn output to go low)
so one of the first things to notice is the octal latch. the 8051 is an embedded microcontroller with integrated timer/io/serial uart. in order to fit all of these in a 40-pin package and still have an external memory bus, the data and low address lines are multiplexed together.

so during a memory cycle the address to be used, A[15:0], is output via D[7:0] and A[15:8]; ALE (address latch enable) then pulses high, and when it falls the '373 latches the low byte off the data bus and holds it on the A[7:0] lines.

(actually i guess i fudged it last time around: this mcu has WR, RD, and PSEN lines--which is interesting because memory is partitioned into a 64kB space of general data ram and a 64kB space of program code. so it's sort of like a harvard split-bus design, only with merged busses(!!). when a read from external data memory is performed, the RD line goes low, whereas when a read from program memory is performed, the PSEN line goes low. few people actually use the PSEN line separately though, so here they are just OR'ed together.)

the '138 takes A[10:8] and decodes them to select either a ram bank, flash rom bank, or one of the two parallel IO chips (via the CS).
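
roughly, in python terms (just a sketch of the logic; which Y output feeds which chip select is whatever the schematic actually wires up, and A[10:8] is per the description above):

    # model of the 74HC138: three select inputs pick one of eight
    # active-low Y outputs; all the others stay high.
    def decode_138(address):
        select = (address >> 8) & 0x7    # A[10:8] as a number 0-7
        y = [1] * 8                      # all outputs idle high
        y[select] = 0                    # the selected output goes low
        return y                         # e.g. y[0] -> ram CS, y[1] -> flash CS, ...

    assert decode_138(0x0200) == [1, 1, 0, 1, 1, 1, 1, 1]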

this design actually seems a bit more complicated than strictly necessary, but it lets you see all the connections.

if the cpu is performing a write operation, it now outputs the data onto the bus. (the octal latch keeps driving the latched address byte, since ALE doesn't toggle again.)

after the address is set up, the cpu brings WR or RD or PSEN low. this drives the appropriate CS/WR/RD lines on the other chips so they do their thing.

after a delay to give the chips time to process the address and read the input or produce an output, the WR/RD/PSEN lines are brought back high, and if the cpu was performing a read operation, it latches whatever is on the data bus.
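
stringing that together, a toy model of the read cycle in python (ordering only, no timing; the dict stands in for whichever chip got selected):

    # the '373 is transparent while ALE is high and holds the low address
    # byte once ALE falls; RD then strobes low, the selected chip drives the
    # data bus, and the cpu samples it as RD returns high.
    def external_read(memory, address):
        p0 = address & 0xFF        # low address byte driven on the muxed bus
        a_high = address >> 8      # A[15:8] driven on the dedicated port pins
        latched_low = p0           # ALE pulses: '373 captures the low byte
        data = memory[(a_high << 8) | latched_low]   # RD low: chip drives bus
        return data                # RD back high: cpu latches the data

    ram = {0x1234: 0x5A}
    assert external_read(ram, 0x1234) == 0x5A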

--------

harvard architecture usually refers to a cpu having two sets of caches, two sets of data busses, and two sets of address busses: one for data (constants and variables) and the other for program storage. this lets you keep your caches clean, fetch data and program code at the same time, etc. but notice that you run into a bit of a conundrum when you want to treat program code as data (not a technical issue, more of a philosophical one).

von neumann is just the opposite (single bus for both), but the term can also be used to refer to the generic concept of a stored-program computer.

data sheets for cpus have all the info you need to connect stuff up, and looking at a few example systems is useful. you especially want to pay attention to the timing diagrams of the signals and busses. start out with simple chips and not high-end cpus, because those have tons of stuff that deals with implementation problems and making things go fast rather than just the fundamental issues.

if you want a book that gives an in-depth example (as well as being a generally really, really useful book), pick up 'the art of electronics, 2nd ed' by horowitz & hill, which walks from design requirements to hardware to firmware for a lab instrument.

with regard to data bigger than the bus, you just do it in a piecewise fashion. for example, if you have an 8-bit bus and you want to transfer a 32-bit value, you make 4 separate complete accesses to 4 adjacent locations in ram.
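
in python terms it's just shifting and masking (bus_read/bus_write are hypothetical stand-ins for whatever does a single 8-bit access; little-endian byte order is an arbitrary choice here):

    # transfer a 32-bit value over an 8-bit bus as four byte-wide accesses
    # to four adjacent addresses.
    def write32(bus_write, base, value):
        for i in range(4):
            bus_write(base + i, (value >> (8 * i)) & 0xFF)

    def read32(bus_read, base):
        return sum(bus_read(base + i) << (8 * i) for i in range(4))

    ram = {}
    write32(ram.__setitem__, 0x100, 0xDEADBEEF)
    assert read32(ram.__getitem__, 0x100) == 0xDEADBEEF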

as for designing your own cpus, you might try looking into getting access to (or buying) an FPGA dev kit. they're fun to toy with and fairly cheap. (your school probably has a lab of them too.) with that you can actually sit down and design a processor and then bootstrap it in hardware. but don't get your hopes up too far thinking you can implement something state of the art: getting enough gates to do something comparable to a current high-end processor will cost you upwards of 5 or 10K USD per chip. although at that point you can get chips with 600 io lines and gigabit io speeds and weird stuff like that.

hopefully some of that made sense.

ok, some links:

  • atmel microcontrollers: should be able to find lots of data sheets, appnotes, etc for a couple of different architectures: 8051 derivative as seen above, an 8-bit risc uC of their own design, and some embedded ARM parts.
  • opencores: free (as in speech) hardware designs for gate arrays and fpgas. they have a number of different cpus, from microcontrollers to mips clones (also take a look at leon, done by some guy at the european space agency, which is a SPARC-compliant cpu that has been implemented in FPGA). they also have lots of links to fpga hardware and discussion lists.
  • def of harvard arch @ wikipedia
  • cpu design howto (i have no idea about the quality of info, i just glanced at it, and if nothing else it has some links).

weeeeeeeeeeee.

Chicago :

i don't see exactly what you're asking with respect to connecting the processor to the rest of the system.

do you mean the tradeoffs you make when designing a chip? or just taking an existing chip and creating the support hardware? i'll throw some stuff out there that hopefully will be interesting.

if you are designing, the constraints are usually fabrication/cost/heat issues. for example, you can make the external bus interface as wide as you like, but it takes pins and power to do that. more pins means a larger, more expensive package, more current draw, more board space, etc. you can see this trade-off at work in some of the newer memory types like rambus, which trade bit-width for signal speed.

another example: harvard-architecture RISC usually has separate instruction and data busses. this means a huge number of pins for I and D even for a 32-bit system. when designers started making chips for smaller, lower-cost systems, many designs collapsed the busses into a more traditional von neumann single bus. internally, the issues are similar, but revolve around getting signals from point a to point b, chip real-estate, heat dissipation, and the like.

higher-end chips concentrate much more effort on the ALU, having multiple ALUs of multiple types, fully pipelined, with re-ordering and all sorts of things. likewise, there are small cpu cores meant to be instantiated in FPGAs, CPLDs, and gate arrays that are internally big state machines. this makes for a very fast cpu (which minimizes the hit you take for instantiating it in non-custom hardware). this trade-off is also visible in VAX vs RISC: vax cpus had tons of modes and instructions implemented in microcode, whereas RISC tends towards simple, easily decodable instructions.

systems are usually connected in one of two ways: shared bus or point-to-point. a shared bus is cheaper and requires less board space and hardware, but doesn't have as much bandwidth. point-to-point, on the other hand, has a fixed (at design time) number of peers it can connect with, whereas a shared bus can hang as many peers off the network as the electrical signal characteristics allow (with the associated degradation in performance). daisy chaining could then be seen as a sort of point-to-point topology with some routing built in (it could be seen as either, actually). look at stuff like the connection machines and cray MP machines, which are usually laid out as hypercubes or 3d toruses.

reading and writing through the same pins is accomplished with 3-state driver hardware built into the output. for every signal, one 3-state driver is attached for output and one for input. each 3-state driver has an enable which works as follows: when the enable is on, the driver acts as just a buffer (an OR gate with its inputs tied together); whatever is on the input is transferred to the output. when the enable is off, the 3-state driver acts as an open circuit, so any incoming signal has no influence on the output (which is said to 'float in a high-impedance state'/Z/hi-Z).
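
as a toy model in python (purely illustrative; a real bus resolves this electrically, and two enabled drivers fighting is a fault, not an exception):

    # several 3-state drivers hang off one line; each either drives its input
    # value (enable on) or floats (enable off). the line is hi-Z if nobody drives.
    def resolve_line(drivers):
        # drivers: list of (enable, value) pairs sharing the same line
        driving = [value for enable, value in drivers if enable]
        if not driving:
            return 'Z'              # nobody driving: high impedance
        if len(set(driving)) > 1:
            raise RuntimeError('bus contention: enabled drivers disagree')
        return driving[0]

    assert resolve_line([(0, 1), (1, 0), (0, 1)]) == 0
    assert resolve_line([(0, 1), (0, 0)]) == 'Z'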

externally, this is coordinated with some bus protocol. most simply: a single "R/*W" line which is high when the cpu is reading from the bus, and low when the cpu is writing to it. (this is the motorola way; intel usually has separate RD and WR lines, as on the 8051 above.)

an FPGA usually has a config option to determine what sort of io you want for each pin. if you are using discretes, you just use a bunch of 3 state buffers.

in any case, hopefully this sort of answered something in your questions. if not, feel free to ask again or more specifically, and i'll try to answer. but i'm only a hack; if anyone else knows better, feel free to add or correct.

Threads, Terrorism, and You:

Ousterhout sums up threads vs state machines rather well.

(reproduced above with permission)

the take home points are:

  • threads are hard and should be avoided except where actual cpu concurrency is required.
  • event-based architectures can be used instead (a minimal sketch below)
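
for flavor, a minimal sketch of that event-based style in python: one thread, a select() loop, and per-connection state kept explicitly instead of in a thread's stack. (a hypothetical echo server on port 8000, just to show the shape of the thing.)

    import select, socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', 8000))
    server.listen(5)
    server.setblocking(False)

    sockets = [server]                        # everything the loop watches
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            if s is server:
                conn, _ = s.accept()          # new client: start watching it
                conn.setblocking(False)
                sockets.append(conn)
            else:
                data = s.recv(4096)
                if data:
                    s.send(data)              # echo back (ignoring partial sends)
                else:
                    sockets.remove(s)         # client went away
                    s.close()
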
StevenRainwater :

Equal and opposite action? - After the Enterprise has rammed the much larger alien starship, the two ships are adrift and locked together in a mass of wreckage. The Enterprise has no power. The alien ship orders full reverse thrust and begins tearing itself away from the Enterprise, which somehow remains motionless in space. What force is holding the Enterprise in place against the thrust exerted by the Alien ship?

unless i misunderstand, it would be inertia.

Consulting & Contracting:

Uche: thanks for your reply. everything, it seems, comes down to determination and doing the hard work necessary to make things happen. funny how that works.

i've been using 4suite's XPath stuff lately. it makes doing XML very bearable.

the last time i used XML for anything, i did all the navigation by hand, which was a little like writing a parser by hand (sort of killing the benefit of xml in the first place). i ended up spending time trying to figure out which extra tab or text node was causing it to barf, and stuff like that. XPath, in comparison, makes everything fairly painless.
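
for comparison, the same idea with the standard library's ElementTree (not 4suite, but its limited xpath support shows the win; the document here is made up):

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <orders>
      <order id="1"><item sku="a1"/><item sku="b2"/></order>
      <order id="2"><item sku="a1"/></order>
    </orders>
    """)

    # one path expression instead of nested loops over child nodes
    for item in doc.findall(".//order/item[@sku='a1']"):
        print(item.attrib)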

hooray!

Consulting & Contracting:

i need to find a job, or more specifically, i need to make money. i would prefer to stay in the area i'm in, but i've not found any interesting full-time employment (granted, i haven't tried all that hard).

it would be peachy if i could do something without leaving the area (working remotely even) on a flexible schedule (while finishing my master's degree).

i've read a few books, and i've done some consulting work before, but i'm in no way experienced. ideally, i'd like to start out doing completely custom work and then transition to customizing a shrink-wrapped product i sell. (this looks like what Uche and co. do with Fourthought.)

i know a number of folks on advo have experience, so here are all the obvious questions: what's the best way of getting clients, how do i bootstrap, what's a good business model, and most importantly how do i make the most of the least effort :D. i'm sure those people can also think of questions i want answered but don't know i want answered. what's the low-down?

i'm really intent on making something fly. living on a student's wages isn't. i figure i could get by on $2k a month or thereabouts, and i've got a fair amount of spare change.

any thoughts?

Research:

i think i'm seeing somewhat better how the whole research thing works. assuming you have the necessary tools (requisite math background, etc.), the first thing you have to do is become familiar with the state of things.

The hard part is actually finding what constitutes a particular field, since fields tend to be very large compared to the size of the problem any one person in them addresses. To be on the safe side, reading a few general textbooks on the subject would be a good start. if they're new, it may even be worthwhile to chase down references, but perhaps it's more important to pay attention to where those references are published and then start reading random, more recent articles from that publication (assuming it's a journal or conference that still exists).

The only way to bootstrap the process of narrowing the field down to a question you want to research is to iterate over a smaller and smaller portion of the literature. hopefully one gets a sense of the major divisions of the field from the textbooks and can begin to classify where each piece of research falls within those divisions. Again, the important task while reading is to look at 1) what problem does the paper attack, 2) what is the contribution of the paper, 3) where does it lie with respect to other approaches in the field (what does it build on, references, etc.), 4) what further questions or problems does it raise?

i think also it's helpful to write these things down into summaries. i've found i quickly forget papers i've read, and even if i recall some higher level information from them, i can't remember which paper i learned it from. (it's also useful if you have to eventually sit for a comprehensive qualifier).

after doing this for a while, you should be able to build a sort of taxonomy of the field. at this point you can attempt to figure out 1) what is the state of the art in the field--what are the major approaches, 2) what open questions or problems exist or are being worked on in the field, 3) what problems are open to attack, 4) (last and not least) what do you find interesting?

i suspect you will have to narrow things down significantly to make this tractable. and with the 3rd issue, hopefully, by reading through the literature, one gains some appreciation for what is possible and worthwhile.

at this point one may finally be in a position to let one's brain chew on the problem, throw things at the problem and see what sticks.

so all of this is pretty obvious, and it's basically what Hamming and others have said (which i've referenced previously). but i guess you just have to sit down, understand the mechanics, and convince yourself of the validity of their approach.

(as an aside, i think this is also why people outside a field who come up with an idea, but use their own vocabulary when talking about it, are looked down upon by those in the field. it shows that the person hasn't done certain things (which are completely within their power) to understand where their idea fits in and whether it is actually of interest. and if they haven't spent the time to do that, why should you?)
