CPU Mining with a large EC2 cluster - Bitcoin Stack Exchange



DAG Technology Analysis and Measurement

This report was produced by the Huobi Blockchain Research Institute (authors: Yuan Yuming and Hu Zhiwei); for the PDF version, please see the original post.
The Huobi Blockchain Application Research Institute studied distributed ledger technology based on the directed acyclic graph (DAG) data structure from a technical perspective and, through hands-on testing of the representative project IOTA, reached the main research results below:
Report body
1 Introduction
Blockchain is a distributed ledger technology, but distributed ledger technology is not limited to the "blockchain". In the wave of digital economic development, more distributed ledger technologies are being explored and applied in order to improve on the original technology and meet more practical business scenarios. The Directed Acyclic Graph (hereinafter "DAG") is one representative.
What is DAG technology, and what is the design thinking behind it? How does it perform in practice? We attempted to reach analytical conclusions through a deep analysis of DAG technology and actual test runs of the representative project IOTA.
It should also be noted that the indicator data obtained from the tests are not, and should not be considered, proof or confirmation of the final effectiveness of the IOTA platform or project.
2. Main conclusions
After research and test analysis, we have the following main conclusions and technical recommendations:
3. DAG Introduction
3.1. Introduction to DAG Principle
A DAG (Directed Acyclic Graph) is a data structure representing a directed graph in which no path leads from any vertex back to itself (i.e., there are no cycles), as shown in the figure below:
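The "no cycles" property can be checked mechanically. Here is a minimal sketch in Python (our choice of language; nothing IOTA-specific) using Kahn's topological-sort algorithm: if every vertex can be processed, the graph is acyclic.

```python
from collections import defaultdict, deque

def is_dag(edges):
    """Return True if the directed graph (a list of (u, v) edges) has no
    cycle, using Kahn's topological-sort algorithm."""
    out_edges = defaultdict(list)
    in_degree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        out_edges[u].append(v)
        in_degree[v] += 1
        nodes.update((u, v))
    # Start from vertices with no incoming edges.
    queue = deque(n for n in nodes if in_degree[n] == 0)
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for m in out_edges[n]:
            in_degree[m] -= 1
            if in_degree[m] == 0:
                queue.append(m)
    # Every node gets processed if and only if no cycle exists.
    return visited == len(nodes)

print(is_dag([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))  # True
print(is_dag([("a", "b"), ("b", "c"), ("c", "a")]))              # False
```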
Since DAG-based distributed ledgers (hereinafter simply "DAG") were proposed in recent years, many have considered them a hopeful replacement for blockchain technology in the narrow sense, because DAG was designed to preserve the advantages of the blockchain while improving on its shortcomings.
Unlike the traditional linear blockchain structure, the transaction records of a distributed ledger platform such as IOTA form a directed acyclic graph, as shown in the following figure.
3.2. DAG characteristics
Because its data structure differs from that of previous blockchains, DAG-based distributed ledger technology offers high scalability and high concurrency and is well suited to IoT scenarios.
3.2.1. High scalability, high concurrency
The data synchronization mechanism of traditional linear blockchains (such as Ethereum) is synchronous, which can cause network congestion. A DAG network adopts an asynchronous communication mechanism that allows concurrent writes: multiple nodes can record transactions at different times without a globally agreed ordering. As a result, the network's data may be temporarily inconsistent, but it eventually converges.

3.2.2. Applicable to IoT scenarios

In a traditional blockchain network, each block contains many transactions, which miners package and broadcast together, involving multiple users. In a DAG network there is no concept of a "block"; the smallest unit of the network is a "transaction", and each new transaction must verify two previous transactions. A DAG network therefore needs no miners to relay trust, and transfers require no fee, which makes DAG technology suitable for micropayments.
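The rule that each new transaction approves two earlier ones can be sketched as a toy model. This is illustrative only — the class, the method names, and the `batch_add` concurrency shortcut are our own inventions, not IOTA's API:

```python
import random

class Tangle:
    """Toy DAG ledger: each transaction approves two earlier ones ("tips"),
    so validation work is done by transaction issuers rather than miners."""
    def __init__(self):
        self.approvals = {"genesis": []}   # tx -> list of txs it approves
        self.approved = set()              # txs referenced at least once

    def tips(self):
        """Transactions that no one has approved yet."""
        return [tx for tx in self.approvals if tx not in self.approved]

    def batch_add(self, tx_ids):
        """Issue several transactions concurrently: all of them select from
        the same (pre-batch) tip set, modeling asynchronous writes."""
        tips = self.tips()
        for tx in tx_ids:
            chosen = random.sample(tips, 2) if len(tips) >= 2 else tips * 2
            self.approvals[tx] = chosen
        for tx in tx_ids:
            self.approved.update(self.approvals[tx])

tangle = Tangle()
tangle.batch_add(["a", "b", "c"])   # three concurrent txs all approve genesis
tangle.batch_add(["d", "e"])        # the next round picks among tips a, b, c
print(sorted(tangle.tips()))        # the newest txs remain unapproved tips
```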
4. Analysis of technical ideas
A trilemma means that in a particular situation only two of three advantageous options can be chosen, or one of three adverse options must be accepted. Such dilemmas have well-known cases in fields as varied as religion, law, philosophy, economics, and business management, and blockchain is no exception. The impossible triangle in blockchain is that of Scalability, Decentralization, and Security: only two of the three can be achieved at once.
Analyzing DAG technology along these lines, and given the introduction above, DAG clearly occupies the decentralization and scalability corners. The two can be regarded as two sides of the same coin: the asynchronous accounting enabled by the DAG data structure simultaneously achieves a high degree of decentralization among participating nodes and scalability of transaction throughput.
5. Existing problems
Since the data structure delivers decentralization and scalability at the same time, the theory of the impossible triangle suggests that security is the hidden danger. But since DAG is a relatively innovative and unusual structure, could it achieve security as well? The actual results suggest otherwise.
5.1. Double-spend problem
DAG's asynchronous communication makes double-spend attacks possible. For example, an attacker adds two conflicting transactions (a double spend) at two different places in the network. The transactions keep being approved by later transactions until both appear on the verification path of the same transaction, at which point the network discovers the conflict. The transaction at which the two conflicting branches first converge can then determine which of the two is the double-spend attack.
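The detection step described above — a conflict surfaces once both spends fall on the verification path of a single later transaction — can be sketched as follows (all names are hypothetical; a real implementation also checks signatures and balances):

```python
def past_cone(approvals, tx):
    """All transactions directly or indirectly approved by tx."""
    seen, stack = set(), [tx]
    while stack:
        cur = stack.pop()
        for parent in approvals.get(cur, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def detects_conflict(approvals, new_tx, spends):
    """A node checking new_tx flags a conflict once two transactions spending
    the same output both appear on its verification path (its past cone)."""
    cone = past_cone(approvals, new_tx) | {new_tx}
    spent = {}
    for tx in cone:
        out = spends.get(tx)
        if out is not None:
            if out in spent:
                return True   # the double spend reached a common verifier
            spent[out] = tx
    return False

# 'a' and 'b' both spend output "utxo1" on different branches;
# 'merge' is the first transaction whose past cone contains both.
approvals = {"genesis": [], "a": ["genesis"], "b": ["genesis"],
             "merge": ["a", "b"]}
spends = {"a": "utxo1", "b": "utxo1"}
print(detects_conflict(approvals, "merge", spends))  # True
print(detects_conflict(approvals, "a", spends))      # False
```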
If the verification path is too short, a "blowball" problem arises: in the extreme case where most transactions are "lazy" and approve only early transactions, the network forms a topology whose core is a small set of early transactions. That is bad for a DAG, which relies on an ever-growing flow of transactions to increase network reliability.
At present, therefore, the double-spend problem must be handled with designs tailored to the actual situation, and different DAG networks have their own solutions.
5.2. Shadow chain problem
Because of the potential for double spends, an attacker who can construct a sufficient number of transactions may fork a fraudulent branch (a "shadow chain") from the real network data, include a double-spend transaction in it, and later merge the branch back into the DAG network. In that case it is possible for the branch to replace the original transaction data.
6. Introduction to the current improvement plan
At present, projects mainly guarantee safety by sacrificing some of DAG's native characteristics.
The IOTA project uses the Markov chain Monte Carlo (MCMC) approach to address this problem. IOTA introduces the concept of cumulative weight for transactions, recording how many times a transaction has been referenced in order to indicate its importance. The MCMC algorithm selects existing transactions in the current network as references for newly added transactions via a random walk weighted by cumulative weight; that is, the more referenced a transaction's path, the more likely the algorithm is to select it. The walk strategy was further optimized in version 1.5.0 to keep the "width" of the transaction topology within a reasonable range, making the network more secure.
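A simplified version of such a cumulative-weight-biased walk can be sketched as below. Real IOTA weights the step probabilities as exp(alpha * weight); here we use the weights directly, and all function names are our own:

```python
import random

def build_approvers(approvals):
    """Invert approvals: tx -> the transactions that directly approve it."""
    approvers = {}
    for tx, parents in approvals.items():
        for p in parents:
            approvers.setdefault(p, []).append(tx)
    return approvers

def cumulative_weight(approvals, tx):
    """1 + number of transactions that directly or indirectly approve tx."""
    approvers = build_approvers(approvals)
    seen, stack = set(), [tx]
    while stack:
        for a in approvers.get(stack.pop(), []):
            if a not in seen:
                seen.add(a)
                stack.append(a)
    return 1 + len(seen)

def random_walk(approvals, start="genesis"):
    """Walk from start toward a tip; at each step pick an approver with
    probability proportional to its cumulative weight (a simplified stand-in
    for IOTA's exp(alpha * weight) transition rule)."""
    approvers = build_approvers(approvals)
    cur = start
    while approvers.get(cur):
        options = approvers[cur]
        weights = [cumulative_weight(approvals, o) for o in options]
        cur = random.choices(options, weights=weights)[0]
    return cur  # a tip, biased toward the most-referenced path

# 'c' is reachable via both branches; the walk always ends at the only tip 'd'.
approvals = {"genesis": [], "a": ["genesis"], "b": ["genesis"],
             "c": ["a", "b"], "d": ["c"]}
print(random_walk(approvals))  # d
```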
However, at platform startup, with a limited number of participating nodes and transactions, it is difficult to prevent a malicious organization from flooding the network with malicious transactions from many nodes, mounting a shadow-chain attack on the entire network. An authoritative arbitration institution is therefore needed to determine the validity of transactions. In IOTA this node is the Coordinator, which periodically snapshots the current transaction network (the Tangle); transactions contained in a snapshot are confirmed as valid. But the Coordinator will not exist forever: as the network runs and grows, IOTA plans to remove the Coordinator at some point in the future.
Byteball's improvement is characterized by its design of witnesses and a main chain. Because the DAG structure yields a large number of only partially ordered transactions, avoiding double spends requires establishing a total order over these transactions, forming a transaction main chain: the earlier of two conflicting transactions on the main chain is considered the valid one. Witnesses, roles held by well-known users or institutions, form the main chain by continuously sending transactions that confirm other users' transactions.
These schemes may also change the platforms built on the DAG structure in different ways. Taking IOTA as an example, the introduction of the Coordinator reduces decentralization to some extent.
7. Actual operation
7.1. Positive effects
Besides addressing security, the above solutions can also help with smart contracts to some extent.
The native features of DAG cause two potential problems: (1) transaction confirmation time is uncontrollable — the current request-retransmission mechanism requires complex timeout logic on the client side, where a simple one-shot confirmation mechanism would be preferable; and (2) there is no global ordering mechanism, which limits the types of operations the system can support. For these reasons it is difficult to implement a Turing-complete smart contract system on a DAG-based distributed ledger platform.
To ensure smart contracts can run, some entity must perform the work above; the current Coordinator or main-chain designs can achieve similar results.
7.2. Negative effects
As one of the most intuitive indicators, DAG's TPS should in theory be unlimited. If the maximum TPS of the IOTA platform is the capacity of a factory, then day-to-day TPS is the factory's actual daily output.
For maximum TPS, the April 2017 IOTA stress test showed the network had a transaction processing capability of 895 TPS with 112 CTPS. This was the result on a small test network of 250 nodes.
For day-to-day TPS, from currently public data, the recent average TPS of the main network is about 8.2, and CTPS (the number of confirmed transactions per second) is about 2.7.
The average TPS of the test network is about 4, and its CTPS about 3.
Data source: Discord bot generic-iota-bot#5760.
Is this related to the existence of the Coordinator? Actual testing is needed to investigate further.
8. Measured analysis
The operational statistics of the open test network are affected by many factors. For further analysis, we continue to use the IOTA platform as an example, building a private test environment for measurement and analysis.
8.1. Test Architecture
The relationships between the components built for this test are shown below.
Among them:
8.2. Testing the hardware environment
The servers are Amazon AWS EC2 c5.4xlarge instances: 16 cores at 3 GHz (Intel Xeon Platinum 8124M), 32 GB of memory, a 10 Gbps LAN between servers with communication latency (ping) under 1 ms, running Ubuntu 16.04.
8.3. Test scenarios and results analysis

8.3.1. Default PoW Difficulty Value

Although there is no concept of "miners", an IOTA node must still perform a proof of work before sending a transaction, to prevent the network from being flooded with transactions. The Minimum Weight Magnitude is similar to Bitcoin's difficulty: the PoW result must end in a given number of "9"s, where "9" corresponds to "000" in the trinary encoding IOTA uses. The IOTA difficulty value can be set before the node is started.
Currently the production network sets the IOTA difficulty to 14 and the test network sets it to 9. We therefore first tested with the test network's default difficulty of 9, obtaining the following results.
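To make the Minimum Weight Magnitude concrete, here is a hedged sketch: we substitute SHA-256 for IOTA's actual Curl hash (so the numbers are illustrative, not compatible) and search for a nonce whose hash ends in the required number of zero trits. Expected work grows as 3^MWM, which is why moving from difficulty 9 to 14 is a 3^5 = 243-fold increase in per-transaction cost.

```python
import hashlib

def trailing_zero_trits(data: bytes) -> int:
    """Count trailing zero trits of the hash output, interpreting the digest
    as a base-3 number (an illustrative stand-in for IOTA's trinary Curl)."""
    n = int.from_bytes(hashlib.sha256(data).digest(), "big")
    count = 0
    while n and n % 3 == 0:
        n //= 3
        count += 1
    return count

def do_pow(tx: bytes, mwm: int) -> int:
    """Find a nonce so the hash ends in at least mwm zero trits.
    Expected work is ~3**mwm hash attempts."""
    nonce = 0
    while trailing_zero_trits(tx + nonce.to_bytes(8, "big")) < mwm:
        nonce += 1
    return nonce

nonce = do_pow(b"demo-transaction", 5)   # mwm=5 keeps the demo fast (~243 tries)
print(trailing_zero_trits(b"demo-transaction" + nonce.to_bytes(8, "big")) >= 5)  # True
```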
Since each IOTA bundle contains multiple transfers, the TPS actually processed is higher than the send rate. But a script parsing the ZMQ feed shows that current TPS is very low, and the number of requests successfully sent per second is also low.
After analysis, the reason is that the test uses VPS instances whose CPUs are largely consumed by the PoW calculation, so the transaction rate is limited mainly by the sending clients rather than by the network.

8.3.2. Decrease the PoW difficulty value

We re-tested with the difficulty set to 1 and obtained the following results.
As the results show, TPS increases when the difficulty is reduced. The current TPS of the IOTA project therefore does not bottleneck at the Coordinator but mainly at the hardware and network of the clients sending transactions. The IOTA community is currently working on an FPGA implementation of the Curl algorithm and on CPU instruction-set optimization; our test results confirm that the performance potential of the DAG platform can be further explored along these lines.

8.3.3. Reduce the number of test network nodes

Because of DAG's characteristics, the platform's actual TPS may also be related to the number of network nodes. Keeping the difficulty at 1, we reduced the number of network nodes to 10 and repeated the test, obtaining the following results.
As the results show, as the number of nodes decreases, actual processed TPS also decreases, falling below the send rate. This suggests that in a DAG environment, maintaining a sufficiently large set of nodes facilitates transaction processing.
9. Reference materials
submitted by i0tal0ver to Iota [link] [comments]

Anyone bullish on XLNX?

There's a pretty interesting debate in the AI space right now on whether FPGAs or ASICs are the way to go for hardware-accelerated AI in production. To summarize, it's more about how to operationalize AI - how to use already trained models with millions of parameters to get real-time predictions, like in video analysis or complex time series models based on deep neural networks. Training those AI models still seems to favor GPUs for now.
Google seem to be betting big on ASICs with their TPU. On the other hand, Microsoft and Amazon seem to favor FPGAs. In fact Microsoft have recently partnered with Xilinx to add FPGA co-processors on half of their servers (they were previously only using Intel's Altera).
The FPGA is the more flexible piece of hardware, but it is less efficient than an ASIC, and FPGAs have been notoriously hard to program against (though things are improving). There's also a nice article out there summarizing the classical FPGA conundrum: they're great for designing and prototyping, but as soon as your architecture stabilizes and you're looking to ramp up production, taking the time to do an ASIC will more often be the better investment.
So the question (for me) is where AI inference will be in that regard. I'm sure Google's projects are large scale enough that an ASIC makes sense, but not everyone is Google. And there is so much research being done in the AI space right now, and everyone's putting out so many promising new ideas, that being more flexible might carry an advantage. Google have already put out three versions of their TPUs in the space of two years.
Which brings me back to Xilinx. They have a promising platform for AI acceleration both in the datacenter and embedded devices which was launched two months ago. If it catches on it's gonna give them a nice boost for the next couple of years. If it doesn't, they still have traditional Industrial, Aerospace & Defense workloads to fall back on...
Another wrinkle is their SoCs are being used in crypto mining ASICs like Antminer, so you never know how that demand is gonna go. As the value of BTC continues to sink there is constant demand for more efficient mining hardware, and I do think cryptocurrencies are here to stay. While NVDA has fallen off a cliff recently due to excess GPU inventory, XLNX has kept steady.

XLNX TTM P/E is 28.98
Semiconductors - Programmable Logic industry's TTM P/E is 26.48

submitted by neaorin to StockMarket [link] [comments]

Improved fork resilience proposal

Note: This develops ideas from my older proposal here: https://www.reddit.com/btc/comments/4vg4qf/proposal_to_increase_forksafety_while_reducing/
You do not need to have read the previous version though, since what I'm presenting here is improved along a number of dimensions, and spells out assorted details.

Design principles

This proposal is designed to meet the following goals:
  1. Bitcoin needs to fork now to increase the block size.
  2. It should be possible to fork Bitcoin without having ASIC miners on board before your fork.
  3. In a hypothetical world in which ASIC miners all stopped mining, Bitcoin (or one of its forks) ought to be able to continue producing blocks. (Genuine worry, see e.g.: http://www.truthcoin.info/blog/mining-heart-attack/)
  4. Nonetheless, the ASIC miners have built up an incredible infrastructure, providing unmatched security.
  5. It makes sense for Bitcoin forks to attempt to benefit from the security provided by the existing ASIC infrastructure.
If you disagree with these, there's probably not too much point arguing about the rest.

Meeting the design goals

To meet the design goals, producing blocks with an sha256(sha256(...)) PoW needs to remain possible. Similar reasoning has led people to propose a reduction in difficulty following the fork. I presume that if (say) a fork had signed up 20% of the hash power, then it would set its new difficulty to (around) 20% of the old difficulty. This seems risky though, as the reduction in difficulty would increase the risk of 51% attacks. (While the needed hash power for a 51% attack is the same regardless of the difficulty, with very low difficulty, blocks will arrive much faster, making it much harder to mitigate such attacks.) Additionally, in the event of a "mining heart attack" (a sudden drop in ASIC hash power), it is unlikely that a hard fork with reduced difficulty could be delivered fast enough to prevent a collapse in value.
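The block-time arithmetic behind this worry is simple to make explicit (a sketch; `expected_block_time` is our own helper, with the pre-fork network normalized to 10-minute blocks):

```python
def expected_block_time(difficulty_fraction, hashpower_fraction, base_minutes=10.0):
    """Expected block interval (minutes) when a chain keeps
    `difficulty_fraction` of the original difficulty and attracts
    `hashpower_fraction` of the original hash power."""
    return base_minutes * difficulty_fraction / hashpower_fraction

# A fork with 20% of the hash power and 20% of the difficulty keeps
# ~10-minute blocks:
print(expected_block_time(0.2, 0.2))   # 10.0

# ...but an attacker wielding the full original hash power mines that
# low-difficulty chain five times faster, making 51% attacks harder to
# mitigate:
print(expected_block_time(0.2, 1.0))   # 2.0
```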
In any case, following a fork, there is likely to be much higher variance in transaction times, as miners move between chains, and the difficulty adjustment algorithm struggles to keep up. People have proposed more responsive difficulty adjustment algorithms, but these produce problems in the longer term, including making certain attacks easier.
This suggests that an alternative approach is needed, namely one in which most blocks are produced using the standard PoW, but in an emergency, an alternative CPU mined PoW could take over. The idea of my proposal is to allow the commencement of mining of CPU mined blocks only after a certain time has elapsed, where the passing of time is measured by the production of timing blocks. In normal times, this reduces the variance of the time between blocks, thus reducing the variance of confirmation times, and making Bitcoin more reliable as a means of payment. In crisis times, such as after a fork or "mining heart attack", this enables CPU miners to produce blocks even when ASIC miners are not.

This proposal

I propose the introduction of two new block types. For clarity, I will call the existing blocks "type A blocks" (A for ASIC). "Type C blocks" (C for CPU) fulfil a similar function to type A blocks, but will be produced with a different algorithm. "Type T blocks" will be small blocks used for timing. Both type C and type T blocks will be CPU-mineable. I will now spell out the details of these new block types.
  • Type T blocks may follow either type A, C or T blocks, but no more than 60 type T blocks may be chained in a row.
  • Type T blocks contain a single coinbase transaction, and no other transactions.
  • Allowable coinbase transactions for type T blocks take as input the current block reward divided by 80.
  • The outputs of coinbase transactions from type T blocks are not spendable until followed by a type C block.
  • Type C blocks may only follow uninterrupted chains of 60 type T blocks.
  • Type C blocks contain a single coinbase transaction, and arbitrarily many other transactions (subject to the block size limit).
  • Allowable coinbase transactions for type C blocks take as input the current block reward divided by four, plus the sum of transaction fees from any included transactions.
  • Note that by construction, the total coinbase outputs of a run of 60 type T blocks and one type C block is 60/80+1/4 = 1 times the block reward, so there is no change to the total number of BTC being produced.
  • In counting blocks for difficulty adjustment, type T blocks are ignored. Thus the difficulty is adjusted after 2016 type A or C blocks since the last adjustment.
  • The new difficulty for type A blocks is adjusted as it is currently. ( new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) ) ) )
  • The difficulty of a type T block (and hence a type C block) is set according to the formula new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) * ( num_type_C_blocks / 100 ) ^ ( 1 / 2 ) ) ), where num_type_C_blocks is the number of type C blocks out of the last 2016 type A or type C blocks. The implicit target here is 100 type C blocks per 2016, meaning a drop in ASIC miner profits of around 5%, which is hopefully not enough to overly annoy them. The slower adjustment to the number of type C blocks reflects the greater sampling variation in num_type_C_blocks and the fact that CPU power changes more slowly than ASIC power.
  • Note, that with roughly 5% of all profits going to CPU miners in normal times, type T block times should be around 30 seconds, and type C block times should be a bit less than 10 minutes. This is in line with my prior proposal, linked above.
  • Multiple low-difficulty "T" blocks are not equivalent to one higher-difficulty block, because the variance of the time to produce N difficulty-K blocks is lower than the variance of the time to produce one difficulty-NK block. (Erlang vs. exponential distributions.) The low variance of the time to produce 60 T blocks is what helps ensure that mining of C blocks only starts after around 30 minutes, meaning that it only happens when ASIC miners have failed to produce A blocks for some reason.
  • The initial difficulty of producing type T and C blocks following the fork should be set so that in a hypothetical world in which (a) only one person CPU mined and (b) the price post-fork was equal to the price pre-fork, that one miner would exactly break even in expectation by CPU mining type T and C blocks on Amazon EC2, assuming that they obtained 5% of all block rewards. This is likely to be a substantial under-estimate of the true cost of CPU mining, due to people having access to zero (or at least lower) marginal cost CPU power, but an under-estimate is desirable to provide resilience post-fork.
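The two adjustment rules and the coinbase bookkeeping above can be put in code (a sketch of the proposal's formulas, not an implementation; the function names are ours):

```python
def adjust_type_a(old_difficulty, time_since_last_adjustment,
                  two_weeks=14 * 24 * 3600):
    """Type A (ASIC) difficulty: the standard Bitcoin rule from the proposal,
    clamped to a factor of 4 per adjustment."""
    raw = old_difficulty * (two_weeks / time_since_last_adjustment)
    return max(old_difficulty / 4, min(old_difficulty * 4, raw))

def adjust_type_t(old_difficulty, time_since_last_adjustment,
                  num_type_c_blocks, two_weeks=14 * 24 * 3600):
    """Type T/C (CPU) difficulty: the same rule scaled by
    sqrt(num_type_C_blocks / 100), targeting 100 type C blocks per 2016."""
    raw = (old_difficulty * (two_weeks / time_since_last_adjustment)
           * (num_type_c_blocks / 100) ** 0.5)
    return max(old_difficulty / 4, min(old_difficulty * 4, raw))

# Reward bookkeeping from the proposal: 60 T blocks at 1/80 of the reward
# plus one C block at 1/4 add up to exactly one block reward.
assert 60 / 80 + 1 / 4 == 1.0

# If blocks came in exactly on schedule but 400 type C blocks appeared
# (4x the target), CPU difficulty rises by sqrt(4) = 2:
print(adjust_type_t(100.0, 14 * 24 * 3600, 400))  # 200.0
```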

Desirable properties

This proposal:
  • substantially reduces the variance of block times, increasing Bitcoin's use as a means of payment, and hence (probably) increasing its price,
  • encourages more people to run full nodes, due to the returns to CPU mining, increasing decentralization,
  • provides protection from sudden falls in ASIC hash rate, reducing tail risk of holding Bitcoin, and thus again (probably) increasing its price,
  • helps provide hash power post-fork, without driving away the existing miners and their hardware,
  • helps us deliver a block-size increase!
submitted by TomDHolden to btc [link] [comments]

A practical way to put miners back to use and back Bitcoin with compute power.

I've heard time and again "Why doesn't Bitcoin just do [insert useful computation here] computation to secure the network!"
Why not Folding@home?
Why not cloud computing?
Bitcoin verification must satisfy these properties:
That last one is important because if Folding@home-style work were used, and then a cure for cancer was found, the value of bitcoin would crash. Folding@home doesn't satisfy the 2nd property anyway.
However, we can put miners back to good (profitable) use, and back Bitcoin value with computational power!
Amazon currently offers a cloud compute service which charges for its use by the hour.
The Bitcoin network of verifiers currently represents the largest distributed computing network in the world.
If we build some software to distribute relatively arbitrary GPGPU computation to miners, and we build a service that offers this computation power to clients, then we can sell Bitcoin mining compute power.
Would this reduce the security of the Bitcoin network? Yes and no.
The security of the Bitcoin network is about to drop due to mining rigs turning off as a result of non-profitability.
We could have these miners turn their rigs back on for the cloud compute service. This wouldn't represent a loss of Bitcoin network security, because we have already lost this security.
If this service was only to accept Bitcoins, then the value of Bitcoins would be backed partly by computation power -- much in the way that it is currently backed partly by drug trade.
Some of you have misinterpreted this as a proposal for a new currency that is secured by arbitrary computation. I explained above why that would not be possible.
This is a proposal for a service that pools mining power for sale -- payable in Bitcoins. Ex-miners would contribute to the service, and be paid daily for their service.
The idea is to back the bitcoin economy with a new merchant service, using powerful equipment that we are about to stop using anyway.
Edit #2:
People keep bringing up centralization, as if this somehow centralizes the entire currency. Suggesting that this needs to be decentralized is as silly as suggesting that your local bar/restaurant needs to be "decentralized" before it can accept Bitcoins.
This is a merchant service -- not a currency!
submitted by kdoto to Bitcoin [link] [comments]

Someone please proofread what I have written below, so it can be posted on r/environment and other such subreddits.

Here is the title.
Bitcoin requires a tremendous amount of electricity to maintain, but there are much more environmentally friendly alternative cryptocurrencies. Please demand that merchants accept the environmentally friendly alternatives.
Executive Summary:
Large Bitcoin mining operations are now being constructed in places where they unnecessarily squander the least expensive, renewable hydro-electric and geothermal-electric resources. There are very environmentally-friendly, readily-available, alternative cryptocurrencies such as Blackcoin and NXT that do not pose a threat to these precious renewable resources.
The environmentally unfriendly coins that require a lot of electricity to mine are called Proof of Work (PoW) coins. The environmentally friendly alternatives like Blackcoin and NXT are called Proof of Stake (PoS) coins.
The price spike in Bitcoin last fall has led to an arms race to adopt electricity-gobbling specialized mining equipment in the pursuit of corporate mining profits. Such equipment was not required to maintain Bitcoin prior to its invention. Specialized mining equipment for a second class of coins similar to Litecoin, another PoW coin, is about to start shipping. This will lead to another large surge in unnecessary corporate mining operations and greatly increase electrical demand in the race for corporate mining profits.
You can read the long report below to find out more about the issue, and you can visit blackcoin and NXT at the links below to find out more about our coins. If you have heard enough and just want to do something quick and simple to support our efforts, visit blackcoin and NXT, click on our subscribe button to show your support, and then watch us take on Bitcoin. While you visit the two subreddits, you can judge for yourself which one you think will succeed.
Hopefully, some respected environmentalist will start campaigns to get merchants that already accept Bitcoin and Litecoin to start accepting the environmentally friendly alternatives.
Electrical requirement to mine PoW coins:
The Bitcoin, Litecoin, and Dogecoin ledger are maintained by miners who compete against each other to see who can first find the next page for their blockchains. Only the miner that wins the race for each ledger page gets paid in coins.
As a result of this competition and the late-2013 price spike, Bitcoin mining corporate startups are popping up in central Washington State, as documented in the first link below, to take advantage of the inexpensive renewable hydro-electricity, and in Iceland, as documented in the second link below, to take advantage of renewable hydro and geothermal resources. If Bitcoin continues to expand, it will unnecessarily eat up more and more of these valuable renewable resources.
Link to Washington State Bitcoin mining article
Link to Iceland Bitcoin mining article
The next surge in electricity requirement is about to happen:
The mining hardware manufacturers are about to start shipping specialized mining equipment that can only mine the Litecoin and Dogecoin type of PoW coins as documented in the two links below. This new front in the mining arms race will gobble up much more precious renewable electricity in the competitive pursuit of corporate mining profits than is currently required to update the ledgers of these coins.
This specialized equipment is prostituting the original bitcoin promise
The tremendously profitable mining of competitively produced crypto coins is unnecessarily prostituting the original concept of their inventor, Satoshi Nakamoto. He envisioned bitcoins as being mined on standard personal computers while performing other useful tasks. Instead, special computer hardware is being manufactured, costing upwards of $10,000 apiece, which can perform only one task. These individual units are being racked up in warehouses now.
This specialized equipment was not required for Bitcoin prior to its invention and is not required currently for Litecoin and Dogecoin. However, it is coming anyway producing an unnecessary arm race in the pursuit of corporate profits.
These specialized devices are energy-inefficient in a second way.
These specialized devices generate so much heat that they require elaborate, energy-intensive cooling systems for large operations. One of the most elaborate of these cooling systems is documented in the link below: a Hong Kong corporate mining operation that immerses the energy-wasting equipment in boiling goo to keep it cool. Thus, large operations consume electricity not only to run the equipment but also to keep it all cool.
PoS coins are the environmentally friendly alternative.
In contrast, the ledger pages of coins like Blackcoin and NXT are generated by stakeholders who cooperate to perform the task on standard multitasking computers. Many of these computers would be running anyway, as originally envisioned by the inventor of the blockchain.
If you have heard enough and just want to do something quick and simple to support our efforts, visit blackcoin and NXT, click on our subscriber button to show your support, and then watch us take on Bitcoin. While you visit the two subreddits, you can judge for yourself which one you think will succeed.
Hopefully, some respected environmentalist will start campaigns to get merchants that already accept Bitcoin and Litecoin to start accepting the environmentally friendly alternatives.
submitted by RJSchex to blackcoin [link] [comments]

Thanks, /r/dogecoin! Much Rock. Such Community. Wow.

I've been a closet crypto enthusiast for a while now. On a whim I stumbled into this community... and dove in with both feet.
Now, I've got 2 Amazon EC2 instances mining dogecoins, I've transferred all my bitcoins over (a paltry amount, for sure), and I've written two articles on altcoins and their future:
And it's you guys and your amazing giving spirit that has inspired me to jump in with my free time and even some cold hard cash to bring down the miners!
I'm an open source software developer (WordPress) and you guys remind me a lot of that community - giving, sharing, and caring.
So, keep up the good work, and let's take this TO THE MOON!
submitted by studionashvegas to dogecoin [link] [comments]

Problems mining with Amazon

When I try to set up a miner through Amazon EC2 I keep getting "Requesting Spot Instance Failure". What am I doing wrong? I've looked through multiple guides, now mainly trying this one: http://grantammons.me/bitcoin/using-amazon-ec2-to-mine-dogecoin/
submitted by ICallItFutile to dogemining [link] [comments]
