DataStax Support Forums » Recent Topics

upant on "Bulk-loading error in DataStax AMI"


Hi,

I am having a problem running bulk loading using the SSTableSimpleUnsortedWriter class in the DataStax AMI (with the default configuration). I tried the bulk-loading example at http://www.datastax.com/dev/blog/bulk-loading.

Here is what I did:
- Created a 3-node cluster by following the instructions at http://www.datastax.com/docs/datastax_enterprise3.0/install/install_dse_ami (DataStax AMI).
- Created the keyspace and column families as described in the example and was able to write/read data using the CLI.
- Created a Java class using the example code at http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java.
- While compiling the class, I got an error: the code listed in the example doesn't pass a partitioner argument to the SSTableSimpleUnsortedWriter constructor. Passing RandomPartitioner to the constructor fixed the error (see the sketch after the error message below).
- While running the code, I got the following error:
Error instantiating snitch class 'com.datastax.bdp.snitch.DseDelegateSnitch'.
Fatal configuration error; unable to start server.
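
For reference, a minimal sketch of the fixed constructor call, assuming the Cassandra 1.x signature in which the partitioner is the second argument (the directory, keyspace, and column family names follow the blog example):

import java.io.File;
import org.apache.cassandra.db.marshal.AsciiType;
import org.apache.cassandra.dht.RandomPartitioner;
import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;

// Sketch only: the blog's writer setup plus the partitioner argument
// that the published code was missing.
File directory = new File("/tmp/Demo/Users");
SSTableSimpleUnsortedWriter usersWriter = new SSTableSimpleUnsortedWriter(
        directory,
        new RandomPartitioner(), // the argument missing from the example
        "Demo",                  // keyspace
        "Users",                 // column family
        AsciiType.instance,      // column name comparator
        null,                    // no subcomparator
        64);                     // buffer size in MB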

However, the same code runs perfectly in Apache Cassandra (the non-DataStax distribution) without any issues or additional configuration.

Is there any additional configuration needed to make it work in DataStax Cassandra, or is it a bug? Has anybody tried sstableloader in DSE?

Thanks in advance,
Uddhab


achillean on "OpsCenter Touch support bug + fix"


I encountered an issue where OpsCenter wouldn't load in the browser because two CSS files were missing. It turns out I was using a touch-capable device, and Dojo was trying to load additional features for devices that can handle touch. Unfortunately, those files didn't exist in my OpsCenter folder (specifically in dgrid). Anyway, to fix the problem all I had to do was download the following files and put them in the relevant directories under /usr/share/opscenter/content/js/dojotoolkit/dgrid/:

https://raw.github.com/SitePen/dgrid/master/css/TouchScroll.css
https://raw.github.com/SitePen/dgrid/master/css/has-transforms3d.css
https://raw.github.com/SitePen/dgrid/master/util/has-css3.js
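
If it helps, something along these lines should fetch them into place (a sketch assuming the package install path above; adjust to your layout):

cd /usr/share/opscenter/content/js/dojotoolkit/dgrid/
wget -P css https://raw.github.com/SitePen/dgrid/master/css/TouchScroll.css
wget -P css https://raw.github.com/SitePen/dgrid/master/css/has-transforms3d.css
wget -P util https://raw.github.com/SitePen/dgrid/master/util/has-css3.js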

Thought I'd share the solution in case anybody else encounters this!

Cheers

Anonymous on "Server side scripting support"

blair on "Source debian package"


Hello,

I'm working on a project, currently in development, that needs locking or CAS. It's scheduled to finish around the time Cassandra 2.0 reaches beta, and instead of using locking I'd like to use the new CAS support. I'd like to deploy Cassandra from git trunk using packages on our Ubuntu systems rather than deploying it manually.

Can DataStax put up the source package at http://debian.datastax.com/community/ ? That would be very useful, and would save me from deploying Cassandra directly or repackaging it myself.
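
For context, what I'd like to end up with is a deb-src line of roughly this shape in our sources.list (hypothetical; the suite and component here just mirror the binary repository and may differ):

deb-src http://debian.datastax.com/community stable main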

Thanks,
Blair

nyadav.ait on "java datastax driver EXCEPTION No handler set for stream 0"


I started using the latest cassandra-driver-core-1.0.1.jar yesterday against the latest Cassandra, 1.2.6. I cross-checked that start_native_transport: true is set in the yaml. My Cassandra is also configured with rpc_address and listen_address set to the machine's host name, and I connect with that same name in the client, but it shows the message below and then hangs at .build().

I also cross-checked that I have all the jars listed at http://www.datastax.com/documentation/developer/java-driver/1.0/java-driver/reference/settingUpJavaProgEnv_r.html

I am using JDK 1.6.
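
For reference, my connection code is essentially the standard driver pattern; a minimal sketch of it is below ("hostname" stands in for the actual machine name):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectTest {
    public static void main(String[] args) {
        // Hangs at build() with driver 1.0.1 against Cassandra 1.2.6
        Cluster cluster = Cluster.builder()
                .addContactPoint("hostname")
                .build();
        Session session = cluster.connect();
        // ... queries would go here ...
        cluster.shutdown();
    }
}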

Here is the message I got:

Jul 17, 2013 11:20:37 AM com.datastax.driver.core.Connection$Dispatcher messageReceived SEVERE: [mlhwlt08/192.168.2.111-1] No handler set for stream 0 (this is a bug, either of this driver or of Cassandra, you should report it). Received message is ROWS [peer(system, peers), org.apache.cassandra.db.marshal.InetAddressType][data_center(system, peers), org.apache.cassandra.db.marshal.UTF8Type][rack(system, peers), org.apache.cassandra.db.marshal.UTF8Type][tokens(system, peers), org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type)][rpc_address(system, peers), org.apache.cassandra.db.marshal.InetAddressType]

| 192.168.2.109 | datacenter1 | rack1 | 000100142d37353634343931333331313737343033343435 | 192.168.2.109
| 192.168.2.108 | datacenter1 | rack1 | 0001000130 | 192.168.2.108

Please help me resolve this problem.


Anonymous on "Bug Report: Searching non-existant columns in SOLR"

$
0
0

Searching for a non-existent column in SOLR results in an rpc_timeout. My assumption is that there should be an error message or something telling you why the search didn't work.

Using the standard wikipedia demo, the following happens:

cqlsh:wiki> select * from solr where solr_query = 'foo:bar';
Request did not complete within rpc_timeout.



ZCSHEN on "bug for ODBC connector?"


Hi,

I suspect this is a bug in the ODBC connector. When I query the database directly with CQL, it returns

1971-09-13 00:00:00 Mountain Daylight Time

However, when I get it from Excel, it becomes
9/13/1971 6:00:00 AM

Both my server and my machine are on Mountain Time.

I have a feeling it's being converted to Zulu (UTC) time automatically? MDT is UTC-6, so midnight MDT is 6:00 AM UTC, which is exactly what Excel shows.
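
A quick way to check that suspicion outside of ODBC (a standalone sketch; America/Denver stands in for Mountain Time):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TzCheck {
    public static void main(String[] args) throws Exception {
        // Parse the CQL value as local Mountain Time
        SimpleDateFormat mountain = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        mountain.setTimeZone(TimeZone.getTimeZone("America/Denver"));
        Date d = mountain.parse("1971-09-13 00:00:00");

        // Render the same instant in UTC: prints 1971-09-13 06:00:00 UTC,
        // matching the 6:00 AM value that Excel displays
        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(utc.format(d));
    }
}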

tambalavanar on "Java code to perform CFS file operations from remote system is not working"


I'm writing a Java program to read and write files in CFS from a remote system (a non-DSE machine). As suggested on the DataStax site, I wrote the following piece of code:


import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.security.UserGroupInformation;

import com.datastax.bdp.hadoop.cfs.CassandraFileSystem;

public class CassandraFileHelper {

    public static void main(String[] args) throws Exception {

        FSDataOutputStream o = null;
        CassandraFileSystem cfs = null;
        String content = "some text content..";

        try {
            // Point Cassandra/DSE at local copies of the server config files
            System.setProperty("cassandra.config", "conf/cassandra.yaml");
            System.setProperty("dse.config", "conf/dse.yaml");

            Configuration conf = new Configuration();
            conf.addResource(new Path("conf/core-site.xml"));

            UserGroupInformation.createUserForTesting("unixuserid", new String[] { "usergroupname" });
            UserGroupInformation.setConfiguration(conf);

            // Connect to CFS on the remote host (Thrift port)
            cfs = new CassandraFileSystem();
            cfs.initialize(URI.create("cfs://hostname:9160/"), conf);

            // Create the test file (overwriting if present) and write to it
            o = cfs.create(new Path("/folder/testfile.txt"), true);
            o.write(content.getBytes());
            o.flush();

        } catch (Exception err) {
            System.out.println("Error: " + err.toString());
        } finally {
            if (o != null)
                o.close();
            if (cfs != null)
                cfs.close();
        }
    }
}


I've included the following configuration files and jars from the DSE package:

  • cassandra.yaml
  • core-site.xml
  • dse.yaml
  • cassandra-all-1.0.10.jar
  • cassandra-clientutil-1.0.10.jar
  • cassandra-thrift-1.0.10.jar
  • commons-cli-1.1.jar
  • commons-codec-1.2.jar
  • commons-configuration-1.6.jar
  • commons-lang-2.4.jar
  • commons-logging-1.1.1.jar
  • compress-lzf-0.8.4.jar
  • dse.jar
  • guava-r08.jar
  • hadoop-core-1.0.2-dse-20120707.200359-5.jar
  • libthrift-0.6.1.jar
  • log4j-1.2.16.jar
  • slf4j-api-1.6.1.jar
  • snakeyaml-1.6.jar
  • snappy-java-1.0.4.1.jar
  • speed4j-0.9.jar

When I run the program, I get the following error:

org.apache.thrift.TApplicationException: Internal error processing batch_mutate

I copied all the config files from a DSE machine, and when I added them I got the following error:

Cannot locate conf/cassandra.yaml
Fatal configuration error; unable to start server. See log for stacktrace.

Could anyone please validate my approach and let me know whether this is possible?
Thanks.

bryan on "3 node solr cluster, why does 1 node get higher load ?"


I have a 3-node DataStax Solr cluster, and if I run the following Apache Bench (ab) command against node3, I see much higher load on node1.

ab -k -c 100 -n 10000 "http://node3:8983/solr/test.solr/select?q=*%3A*&wt=xml&indent=true"

It doesn't matter which node in the cluster I query; node1 always ends up with the higher CPU load. Can this be explained?

Sven on "ANNOUNCEMENT: This forum has been moved to stackoverflow.com / serverfault.com"


In an effort to consolidate the free help offered for our products, we have decided to move these forums to more widely used sites. Please use one of the following queries (or any combination):

- http://stackoverflow.com/questions/tagged/cassandra for tag search or http://stackoverflow.com/search?q=cassandra for plain text search
- http://stackoverflow.com/questions/tagged/datastax-enterprise for tag search or http://stackoverflow.com/search?q=datastax for plain text search
- http://serverfault.com/questions/tagged/datastax-opscenter for tag search or http://serverfault.com/search?q=opscenter for plain text search

We also suggest subscribing to the Cassandra users list linked at the bottom of http://cassandra.apache.org/ and visiting http://planetcassandra.org/.

We will continue to monitor the forums until the end of the month, at which point they will be switched to read-only. The content will remain available, so search engines and other existing links will still work.

Thanks,
Sven Delmas

Sven on "ANNOUNCEMENT: This forum has been moved to stackoverflow.com / serverfault.com"

$
0
0

In an effort to consolidate free help offered for our products we have decided to move these forums to a more widely used forum. Please use one of the following queries (or any combination):

- http://stackoverflow.com/questions/tagged/cassandra for tag search or http://stackoverflow.com/search?q=cassandra for plain text search
- http://stackoverflow.com/questions/tagged/datastax-enterprise for tag search or http://stackoverflow.com/search?q=datastax for plain text search
- http://serverfault.com/questions/tagged/datastax-opscenter for tag search or http://serverfault.com/search?q=opscenter for plain text search

We also suggest subscribing to the Cassandra users list linked at the bottom of http://cassandra.apache.org/ and visiting http://http://planetcassandra.org/.

We will continue to monitor the forums until the end of the month, at which point they will be switched to read only. The content will remain available, so search engines and other existing links still work.

Thanks,
Sven Delmas

turif on "How can we enable hdfs:// and cfs:// too?"


Hi,

Our DSE file system is available via the cfs:// protocol (e.g. cfs://localhost), but hdfs:// is not available.
Can we turn on hdfs:// too?
As far as I know, CFS is fully compatible with the Hadoop FileSystem API, so I would expect both protocols to work at the same time.

The following command is ok:

dse hadoop fs -fs cfs://hostname/ -ls

But

dse hadoop fs -fs hdfs://hostname/ -ls
13/09/06 07:40:08 INFO ipc.Client: Retrying connect to server: hostname:8020. Already tried 0 time(s).

dse-core-default.xml contains the following:

<property>
  <name>cassandra.client.transport.factory</name>
  <value>com.datastax.bdp.transport.client.TDseClientTransportFactory</value>
</property>
<property>
  <name>fs.cfs-archive.impl</name>
  <value>com.datastax.bdp.hadoop.cfs.CassandraFileSystem</value>
</property>
<property>
  <name>fs.cfs.impl</name>
  <value>com.datastax.bdp.hadoop.cfs.CassandraFileSystem</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>cfs://hostname</value>
</property>
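
For reference, this is how I select a filesystem explicitly by URI from the Hadoop Java API (my own sketch; it assumes the fs.<scheme>.impl mappings above are on the classpath):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CfsListSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // An explicit URI overrides fs.default.name and selects the
        // implementation registered for that scheme (fs.cfs.impl above).
        FileSystem fs = FileSystem.get(URI.create("cfs://hostname/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/")))
            System.out.println(status.getPath());
    }
}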

Can I add multiple values to this fs.default.name property?

Thanks,

Ferenc

neverforever on "Extracting data from a huge column family"


I'm working on a project where we have stored a large amount of data in a single column family: upwards of 3 billion columns across thousands of rows. When attempting to extract this data and drop it into a file on Amazon S3, we've tried typical big-data tools like Hive and Pig, but when accessing the data through CFS or CassandraStorage we consistently run into timeout exceptions.
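
For concreteness, the Pig route boils down to something like this (a sketch; the keyspace, column family, and bucket names are placeholders, and CassandraStorage is the loader that ships with Cassandra's Pig support):

rows = LOAD 'cassandra://MyKeyspace/MyColumnFamily' USING org.apache.cassandra.hadoop.pig.CassandraStorage();
STORE rows INTO 's3n://my-bucket/export' USING PigStorage(',');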

I'm interested in knowing whether anyone has come across this type of problem, what optimizations might resolve the issue, and whether there are other tools better suited to this task.

Thanks


ken.hancock@schange.com on "Off-Heap Solr leak during repair?"


I'm following the recommendations to do sub-range repairs so as not to reindex too many excess documents when the Merkle trees reach their maximum size. To do this, I've modified a script provided by Matt Stump. My copy of the script is up on GitHub: https://github.com/hancockks/cassandra_range_repair
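
Each step of the script boils down to repairing one slice of a node's range, roughly of this shape (a sketch; the -st/-et subrange options assume Cassandra 1.1.11 or later, and the keyspace and tokens are placeholders):

nodetool -h 192.168.131.245 repair <keyspace> -st <start_token> -et <end_token>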

My cluster is set up as follows:

Datacenter: Solr
==========
Address Rack Status State Load Owns Token
7378697629483820641
192.168.131.224 rack1 Up Normal 6.95 GB 10.00% -9223372036854775800
192.168.131.233 rack1 Up Normal 8.07 GB 10.00% -7378697629483820647
192.168.131.245 rack1 Up Normal 9.83 GB 20.00% -3689348814741910325
192.168.131.227 rack1 Up Normal 13.13 GB 10.00% -1844674407370955164
192.168.131.195 rack1 Up Normal 17.69 GB 10.00% -3
192.168.131.192 rack1 Up Normal 10.4 GB 10.00% 1844674407370955158
192.168.131.191 rack1 Up Normal 9.04 GB 10.00% 3689348814741910319
192.168.131.194 rack1 Up Normal 6.6 GB 10.00% 5534023222112865480
192.168.131.196 rack1 Up Normal 12.29 GB 10.00% 7378697629483820641

As a test, I ran a repair on .245 using its own partition range in 1000 steps with Solr turned off. I monitored Linux free memory on .195, which is part of .245's replication group.

I then turned Solr back on and restarted the cluster. I ran a repair on .227 using its own partition range in 1000 steps. I monitored Linux free memory on the same .195 node, which is also part of .227's replication group.

http://s937.photobucket.com/user/ken_hancock_schange/library/DSE%20Suspected%20Memory%20Leak

Memory constantly declines and is never returned. Eventually the node runs out of memory and dies in random ways. In the worst case, it hits an OutOfMemory while trying to spawn "df" for the disk free-space check, interprets the OutOfMemory as a lack of disk space, and then the node automatically tries to leave the cluster, streaming its data elsewhere.

ERROR [pool-24-thread-1] 2013-09-09 17:46:15,861 DiskHealthChecker.java (line 62) Error in finding disk space for directory /var/data/cassandra/data
java.io.IOException: Cannot run program "df": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at java.lang.Runtime.exec(Runtime.java:593)
at java.lang.Runtime.exec(Runtime.java:466)
at org.apache.commons.io.FileSystemUtils.openProcess(FileSystemUtils.java:535)
at org.apache.commons.io.FileSystemUtils.performCommand(FileSystemUtils.java:482)
at org.apache.commons.io.FileSystemUtils.freeSpaceUnix(FileSystemUtils.java:396)
at org.apache.commons.io.FileSystemUtils.freeSpaceOS(FileSystemUtils.java:266)
at org.apache.commons.io.FileSystemUtils.freeSpaceKb(FileSystemUtils.java:200)
at org.apache.commons.io.FileSystemUtils.freeSpaceKb(FileSystemUtils.java:171)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:52)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.access$000(DiskHealthChecker.java:18)
at com.datastax.bdp.util.DiskHealthChecker$DiskHealthCheckTask.run(DiskHealthChecker.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
... 26 more
INFO [pool-24-thread-1] 2013-09-09 17:46:15,883 DiskHealthChecker.java (line 82) Removing this node from the ring for the disk is close to FULL
INFO [pool-24-thread-1] 2013-09-09 17:46:15,912 EndpointStateTracker.java (line 139) Endpoint /192.168.131.245 state changed STATUS = LEAVING,-3689348814741910325

shravanww on "Datastax Hadoop Usage"


Hello All,

I am working on a solution that uses DataStax Hadoop for some processing and DataStax Cassandra for storage, but we are going to use Hortonworks as our Hadoop platform. Is it possible to use DataStax Hadoop (Apache-based) even though the production Hadoop system is going to be Hortonworks? Any help is highly appreciated, as I am going to present my solution tomorrow.

Problem: We have complex derivation logic for a few columns that currently lives in Teradata procedures. My solution is to set up a DataStax cluster comprising a group of analytics nodes on which DataStax Hadoop is installed, and to implement the existing logic using Hive. The other nodes will run the Cassandra File System and store the resulting derived column values from Hive.

Since we have Hortonworks as our Hadoop platform, can I suggest my solution above? Please help.
Thanks & Regards,
Shravan

shravanww on "Urgent Help Please: Sqoop Export with Cassandra?"


Hello All,

I need some quick info, please. Is there a way for Sqoop export to extract data from Cassandra and produce output files?
Please help :) Thanks in advance!

Thanks,
Shravan

bwong64 on "Opscenter Agent not connecting to Opscenter"


I also tried adding the public address of the Cassandra node to the agent's address.yaml:

local_interface: "public ip"

And I restarted the opscenterd process and then the agents.

That gave the same result...
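
For completeness, the agent side of my config is essentially just these two lines (a sketch of what I'm testing; pointing stomp_interface at the OpsCenter machine is my assumption about the intended setup):

stomp_interface: "opscenter server ip"
local_interface: "public ip"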

junker on "OpsCenter 3.2.2 shows NO DATA for io and network performance metrics"


OpsCenter 3.2.2 shows NO DATA for the I/O and network performance metrics; all other metrics are OK.
All nodes run Debian 7. The same setup works fine on Debian 6.
I see that the iostat -x output on Debian 7 differs from Debian 6 (for example, it adds r_await and w_await columns); maybe that is the problem?

Debian 6:
# iostat -x
Linux 2.6.32-5-amd64 (old-node) 09/09/2013 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
7.50 0.29 1.22 2.07 0.00 88.92

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.06 42.20 3.64 19.88 195.90 496.73 29.45 0.01 27.57 8.73 20.53
sdb 0.00 8.79 0.89 0.81 181.86 185.08 216.26 0.34 197.65 3.56 0.60
sdc 0.00 8.95 0.98 0.76 187.83 180.02 210.58 0.37 212.20 3.50 0.61
sdd 0.00 8.74 0.98 0.78 186.84 180.49 209.73 0.33 186.00 3.72 0.65

Debian 7:
# iostat -x
Linux 3.10.9-xxxx-grs-ipv6-64 (node1) 09/09/13 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
12.33 2.25 3.19 4.06 0.00 78.17

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 17.60 0.31 35.30 34.01 372.56 22.84 0.41 11.53 1.25 11.62 11.26 40.08
sdb 0.21 0.62 8.76 6.65 999.01 1288.72 296.80 1.05 68.31 24.58 125.88 9.03 13.91

And the OpsCenter agent log:
# cat /var/log/opscenter-agent/agent.log
ERROR [os-metrics-8] 2013-09-08 20:17:29,099 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-9] 2013-09-08 20:17:39,099 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-8] 2013-09-08 20:17:49,100 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-9] 2013-09-08 20:17:59,100 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-8] 2013-09-08 20:18:09,100 Long os-stats collector failed: /proc/net/dev (No such file or directory)
INFO [install-location-finder] 2013-09-08 20:18:10,783 New JMX connection (127.0.0.1:7199)
INFO [conf-requester] 2013-09-08 20:18:18,106 Requesting latest conf from opscenterd
ERROR [os-metrics-9] 2013-09-08 20:18:19,101 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-5] 2013-09-08 20:18:29,101 Long os-stats collector failed: /proc/net/dev (No such file or directory)
ERROR [os-metrics-2] 2013-09-08 20:18:30,796 Long os-stats collector failed
java.lang.NullPointerException
at clojure.lang.RT.doubleCast(RT.java:1222)
at opsagent.util$nan_QMARK_.invoke(util.clj:141)
at opsagent.rollup$add_value.invoke(rollup.clj:156)
at opsagent.rollup$process_keypair$fn__511.invoke(rollup.clj:234)
at opsagent.cache$update_cache_value_default$fn__405$fn__406.invoke(cache.clj:23)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.lang.Ref.alter(Ref.java:174)
at clojure.core$alter.doInvoke(core.clj:2244)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at opsagent.cache$update_cache_value_default$fn__405.invoke(cache.clj:23)
at clojure.lang.AFn.call(AFn.java:18)
at clojure.lang.LockingTransaction.run(LockingTransaction.java:263)
at clojure.lang.LockingTransaction.runInTransaction(LockingTransaction.java:231)
at opsagent.cache$update_cache_value_default.invoke(cache.clj:22)
at opsagent.rollup$process_keypair.invoke(rollup.clj:234)
at opsagent.rollup$process_metric_map.invoke(rollup.clj:240)
at opsagent.os.collection$start_os_stat_collection$send_metric__4987.invoke(collection.clj:80)
at opsagent.os.linux_metrics$sendmap.invoke(linux_metrics.clj:12)
at opsagent.os.linux_metrics$report_iostats.invoke(linux_metrics.clj:244)
at opsagent.os.linux_metrics$collectors$wrap_long_collector__2840$fn__2841.invoke(linux_metrics.clj:269)
at opsagent.os.collection$start_pool$fn__4944.invoke(collection.clj:34)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

# cat /proc/net/dev
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
dummy0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
bond0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
eth0: 57404665380 80272061 0 0 0 0 0 0 27256584702 67807599 0 0 0 0 0 0
ip6tnl0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
lo: 38914419704 20880706 0 0 0 0 0 0 38914419704 20880706 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
tunl0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
