
My Blog for Notes

What we're talking about when we talk about shared memory parallelism - Boosting

This time I'm posting the first draft of the paper I will submit for the seminar, which is admittedly a lazy way to write a post. A Brief Overview of Shared Memory Parallelism in Parallel Computing. 1. Why parallel? To gain speed while we face the limits of current transistor technology and rising energy consumption. Now we know it is necessary and brings many benefits, but it also means you must restructure your program yourself rather than have an API distribute your serial code automatically.

Generation of height maps based on the diamond-square algorithm in C++

Many of the impressive renderings we see, such as terrain that appears three-dimensional, can in fact be generated from two-dimensional data: each pixel is assigned a height value, these height values form a grid of points, and the grid can be stored as a two-dimensional image; when needed, the corresponding terrain can be reconstructed from that picture. In this introduction we are only concerned with assigning the values using a normal distribution or the diamond-square algorithm; the color and normal-map images used by the terrain generator will be covered later.

DFA and NFA -- 2

From part 1 we know what DFAs and NFAs look like and that they are equivalent. In "From Finite Automata to Regular Languages" we saw the equivalence between DFAs, NFAs, regular expressions, and regular languages, along with tools such as the pumping lemma and Arden's Lemma to either prove properties or convert between them. We have made some progress at the 'art' level so far, so we might
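Arden's Lemma, mentioned above as a conversion tool, can be stated compactly; this is the standard textbook form, not a result specific to this series:

```latex
% Arden's Lemma: for languages A and B over a common alphabet,
% if the empty word is not in A, the language equation
%   X = AX \cup B
% has the unique solution
X = AX \cup B,\quad \varepsilon \notin A \;\Longrightarrow\; X = A^{*}B
```

This is what lets one read off a system of language equations from a DFA's states and solve it step by step to obtain a regular expression.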

From Finite Automata to Regular Languages - Implementation Part

This article can be seen as a supplement to "From Finite Automata to Regular Languages", mainly providing a Java implementation of the DFA-to-RE conversion. Because the code is copied from a course assignment, I don't know whether there will be any copyright problem, but this is not commercial use, and some of the code was written by me, so it should not be a big problem. (The

What we're talking about when we talk about shared memory parallelism - a precursor

I'm going to start a new series alongside the Theory and Network series I'm currently working on: a parallel computing series. This semester I was assigned the topic "Shared Memory Parallelization", which is the title of the seminar. Because I need to submit a paper and give a presentation at the end, I think I can also open a new series to